2026-01-02 00:00:07.804418 | Job console starting
2026-01-02 00:00:07.845494 | Updating git repos
2026-01-02 00:00:07.960001 | Cloning repos into workspace
2026-01-02 00:00:08.362919 | Restoring repo states
2026-01-02 00:00:08.394504 | Merging changes
2026-01-02 00:00:08.394526 | Checking out repos
2026-01-02 00:00:08.756910 | Preparing playbooks
2026-01-02 00:00:10.209959 | Running Ansible setup
2026-01-02 00:00:21.607752 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-02 00:00:26.180311 |
2026-01-02 00:00:26.180502 | PLAY [Base pre]
2026-01-02 00:00:26.262535 |
2026-01-02 00:00:26.262823 | TASK [Setup log path fact]
2026-01-02 00:00:26.361116 | orchestrator | ok
2026-01-02 00:00:26.455117 |
2026-01-02 00:00:26.455327 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-02 00:00:26.600080 | orchestrator | ok
2026-01-02 00:00:26.661432 |
2026-01-02 00:00:26.661592 | TASK [emit-job-header : Print job information]
2026-01-02 00:00:26.761618 | # Job Information
2026-01-02 00:00:26.761817 | Ansible Version: 2.16.14
2026-01-02 00:00:26.761855 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-01-02 00:00:26.761889 | Pipeline: periodic-midnight
2026-01-02 00:00:26.761914 | Executor: 521e9411259a
2026-01-02 00:00:26.761935 | Triggered by: https://github.com/osism/testbed
2026-01-02 00:00:26.761958 | Event ID: dd79a623558a426383a27ac1b46d7c38
2026-01-02 00:00:26.769148 |
2026-01-02 00:00:26.769322 | LOOP [emit-job-header : Print node information]
2026-01-02 00:00:27.912956 | orchestrator | ok:
2026-01-02 00:00:27.913298 | orchestrator | # Node Information
2026-01-02 00:00:27.913343 | orchestrator | Inventory Hostname: orchestrator
2026-01-02 00:00:27.913369 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-02 00:00:27.913392 | orchestrator | Username: zuul-testbed04
2026-01-02 00:00:27.913413 | orchestrator | Distro: Debian 12.12
2026-01-02 00:00:27.913436 | orchestrator | Provider: static-testbed
2026-01-02 00:00:27.913458 | orchestrator | Region:
2026-01-02 00:00:27.913479 | orchestrator | Label: testbed-orchestrator
2026-01-02 00:00:27.913498 | orchestrator | Product Name: OpenStack Nova
2026-01-02 00:00:27.913517 | orchestrator | Interface IP: 81.163.193.140
2026-01-02 00:00:27.930812 |
2026-01-02 00:00:27.930996 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-02 00:00:29.605293 | orchestrator -> localhost | changed
2026-01-02 00:00:29.614003 |
2026-01-02 00:00:29.614122 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-02 00:00:32.753819 | orchestrator -> localhost | changed
2026-01-02 00:00:32.785080 |
2026-01-02 00:00:32.785257 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-02 00:00:34.286262 | orchestrator -> localhost | ok
2026-01-02 00:00:34.292571 |
2026-01-02 00:00:34.292729 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-02 00:00:34.335417 | orchestrator | ok
2026-01-02 00:00:34.396658 | orchestrator | included: /var/lib/zuul/builds/fd31d5addc3042eb80219b9fb0deced2/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-02 00:00:34.418345 |
2026-01-02 00:00:34.418475 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-02 00:00:37.058119 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-02 00:00:37.058317 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/fd31d5addc3042eb80219b9fb0deced2/work/fd31d5addc3042eb80219b9fb0deced2_id_rsa
2026-01-02 00:00:37.058356 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/fd31d5addc3042eb80219b9fb0deced2/work/fd31d5addc3042eb80219b9fb0deced2_id_rsa.pub
2026-01-02 00:00:37.058383 | orchestrator -> localhost | The key fingerprint is:
2026-01-02 00:00:37.058407 | orchestrator -> localhost | SHA256:A/GvBx5pQfB86mXkTNGz1YSYqc7qV5sFxN1wqjFJ+lo zuul-build-sshkey
2026-01-02 00:00:37.058429 | orchestrator -> localhost | The key's randomart image is:
2026-01-02 00:00:37.058463 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-02 00:00:37.058486 | orchestrator -> localhost | | o.. .o.=.*o|
2026-01-02 00:00:37.058507 | orchestrator -> localhost | | * +O.+oo|
2026-01-02 00:00:37.058527 | orchestrator -> localhost | | . = =o++. |
2026-01-02 00:00:37.058547 | orchestrator -> localhost | | . @..o+ |
2026-01-02 00:00:37.058566 | orchestrator -> localhost | | So* E. |
2026-01-02 00:00:37.058592 | orchestrator -> localhost | | + Boo. . |
2026-01-02 00:00:37.058613 | orchestrator -> localhost | | +.o. + |
2026-01-02 00:00:37.058635 | orchestrator -> localhost | | ... o |
2026-01-02 00:00:37.058656 | orchestrator -> localhost | | ... |
2026-01-02 00:00:37.058677 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-02 00:00:37.058724 | orchestrator -> localhost | ok: Runtime: 0:00:01.217473
2026-01-02 00:00:37.065911 |
2026-01-02 00:00:37.066007 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-02 00:00:37.106020 | orchestrator | ok
2026-01-02 00:00:37.131513 | orchestrator | included: /var/lib/zuul/builds/fd31d5addc3042eb80219b9fb0deced2/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-02 00:00:37.157611 |
2026-01-02 00:00:37.157722 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-02 00:00:37.213356 | orchestrator | skipping: Conditional result was False
2026-01-02 00:00:37.221603 |
2026-01-02 00:00:37.221709 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-02 00:00:38.166518 | orchestrator | changed
2026-01-02 00:00:38.181404 |
2026-01-02 00:00:38.181536 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-02 00:00:38.505960 | orchestrator | ok
2026-01-02 00:00:38.512488 |
2026-01-02 00:00:38.512587 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-02 00:00:39.094060 | orchestrator | ok
2026-01-02 00:00:39.119774 |
2026-01-02 00:00:39.120015 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-02 00:00:39.708603 | orchestrator | ok
2026-01-02 00:00:39.731565 |
2026-01-02 00:00:39.731685 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-02 00:00:39.811500 | orchestrator | skipping: Conditional result was False
2026-01-02 00:00:39.818273 |
2026-01-02 00:00:39.818382 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-02 00:00:41.279119 | orchestrator -> localhost | changed
2026-01-02 00:00:41.343298 |
2026-01-02 00:00:41.343427 | TASK [add-build-sshkey : Add back temp key]
2026-01-02 00:00:43.054876 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/fd31d5addc3042eb80219b9fb0deced2/work/fd31d5addc3042eb80219b9fb0deced2_id_rsa (zuul-build-sshkey)
2026-01-02 00:00:43.055090 | orchestrator -> localhost | ok: Runtime: 0:00:00.030339
2026-01-02 00:00:43.062139 |
2026-01-02 00:00:43.062271 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-02 00:00:44.258360 | orchestrator | ok
2026-01-02 00:00:44.269431 |
2026-01-02 00:00:44.269539 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-02 00:00:44.335997 | orchestrator | skipping: Conditional result was False
2026-01-02 00:00:44.504147 |
2026-01-02 00:00:44.504263 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-02 00:00:45.251324 | orchestrator | ok
2026-01-02 00:00:45.262887 |
2026-01-02 00:00:45.262993 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-02 00:00:45.312696 | orchestrator | ok
2026-01-02 00:00:45.321712 |
2026-01-02 00:00:45.321840 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-02 00:00:47.639236 | orchestrator -> localhost | ok
2026-01-02 00:00:47.654522 |
2026-01-02 00:00:47.654656 | TASK [validate-host : Collect information about the host]
2026-01-02 00:00:51.038818 | orchestrator | ok
2026-01-02 00:00:51.092005 |
2026-01-02 00:00:51.092260 | TASK [validate-host : Sanitize hostname]
2026-01-02 00:00:51.360051 | orchestrator | ok
2026-01-02 00:00:51.383586 |
2026-01-02 00:00:51.383759 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-02 00:00:55.436088 | orchestrator -> localhost | changed
2026-01-02 00:00:55.441735 |
2026-01-02 00:00:55.441840 | TASK [validate-host : Collect information about zuul worker]
2026-01-02 00:00:56.339393 | orchestrator | ok
2026-01-02 00:00:56.343724 |
2026-01-02 00:00:56.343803 | TASK [validate-host : Write out all zuul information for each host]
2026-01-02 00:00:59.008133 | orchestrator -> localhost | changed
2026-01-02 00:00:59.017137 |
2026-01-02 00:00:59.017260 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-02 00:00:59.389261 | orchestrator | ok
2026-01-02 00:00:59.394250 |
2026-01-02 00:00:59.394340 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-02 00:02:15.855668 | orchestrator | changed:
2026-01-02 00:02:15.859490 | orchestrator | .d..t...... src/
2026-01-02 00:02:15.859564 | orchestrator | .d..t...... src/github.com/
2026-01-02 00:02:15.859591 | orchestrator | .d..t...... src/github.com/osism/
2026-01-02 00:02:15.859613 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-02 00:02:15.859635 | orchestrator | RedHat.yml
2026-01-02 00:02:15.874990 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-02 00:02:15.875007 | orchestrator | RedHat.yml
2026-01-02 00:02:15.875059 | orchestrator | = 2.2.0"...
2026-01-02 00:02:28.115193 | orchestrator | - Finding latest version of hashicorp/null...
2026-01-02 00:02:28.135473 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-01-02 00:02:28.277372 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-02 00:02:29.583403 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-02 00:02:29.645568 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-02 00:02:30.185228 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-02 00:02:30.253537 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-02 00:02:30.958918 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-02 00:02:30.959024 | orchestrator |
2026-01-02 00:02:30.959034 | orchestrator | Providers are signed by their developers.
2026-01-02 00:02:30.959041 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-02 00:02:30.959046 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-02 00:02:30.959054 | orchestrator |
2026-01-02 00:02:30.959059 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-02 00:02:30.959081 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-02 00:02:30.959086 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-02 00:02:30.959091 | orchestrator | you run "tofu init" in the future.
2026-01-02 00:02:30.959431 | orchestrator |
2026-01-02 00:02:30.959440 | orchestrator | OpenTofu has been successfully initialized!
2026-01-02 00:02:30.959445 | orchestrator |
2026-01-02 00:02:30.959449 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-02 00:02:30.959459 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-02 00:02:30.959477 | orchestrator | should now work.
2026-01-02 00:02:30.959482 | orchestrator |
2026-01-02 00:02:30.959486 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-02 00:02:30.959490 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-02 00:02:30.959494 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-02 00:02:31.120357 | orchestrator | Created and switched to workspace "ci"!
2026-01-02 00:02:31.120422 | orchestrator |
2026-01-02 00:02:31.120429 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-02 00:02:31.120435 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-02 00:02:31.120464 | orchestrator | for this configuration.
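[Editor's note] The lock file mentioned by the init output pins the provider versions this run selected. Based on the "Installed ..." lines in this log, the generated `.terraform.lock.hcl` would look roughly like the sketch below. This is a reconstruction, not the actual file: the `registry.opentofu.org` host is assumed, the `hashes` lists are build-specific and omitted, and only the openstack constraint (">= 1.53.0") is fully visible in the log, so the others are left out.

```hcl
# Sketch of .terraform.lock.hcl implied by this run (hashes omitted;
# a real lock file records them so future runs can verify integrity).
provider "registry.opentofu.org/hashicorp/local" {
  version = "2.6.1"
  # hashes = [ ... ]
}

provider "registry.opentofu.org/hashicorp/null" {
  version = "3.2.4"
  # hashes = [ ... ]
}

provider "registry.opentofu.org/terraform-provider-openstack/openstack" {
  version     = "3.4.0"
  constraints = ">= 1.53.0"
  # hashes = [ ... ]
}
```

Committing this file, as the log suggests, makes "tofu init" reproduce the same provider selections on every future run.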
2026-01-02 00:02:31.240070 | orchestrator | ci.auto.tfvars
2026-01-02 00:02:31.450667 | orchestrator | default_custom.tf
2026-01-02 00:02:32.868726 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-02 00:02:33.959665 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-02 00:02:34.790561 | orchestrator |
2026-01-02 00:02:34.790632 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-02 00:02:34.790639 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-02 00:02:34.790665 | orchestrator | + create
2026-01-02 00:02:34.790681 | orchestrator | <= read (data resources)
2026-01-02 00:02:34.790698 | orchestrator |
2026-01-02 00:02:34.790704 | orchestrator | OpenTofu will perform the following actions:
2026-01-02 00:02:34.790812 | orchestrator |
2026-01-02 00:02:34.790827 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-02 00:02:34.790833 | orchestrator | # (config refers to values not yet known)
2026-01-02 00:02:34.790839 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-02 00:02:34.790845 | orchestrator | + checksum = (known after apply)
2026-01-02 00:02:34.790850 | orchestrator | + created_at = (known after apply)
2026-01-02 00:02:34.790855 | orchestrator | + file = (known after apply)
2026-01-02 00:02:34.790861 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.790878 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:34.790883 | orchestrator | + min_disk_gb = (known after apply)
2026-01-02 00:02:34.790888 | orchestrator | + min_ram_mb = (known after apply)
2026-01-02 00:02:34.790892 | orchestrator | + most_recent = true
2026-01-02 00:02:34.790896 | orchestrator | + name = (known after apply)
2026-01-02 00:02:34.790900 | orchestrator | + protected = (known after apply)
2026-01-02 00:02:34.790904 | orchestrator | + region = (known after apply)
2026-01-02 00:02:34.790912 | orchestrator | + schema = (known after apply)
2026-01-02 00:02:34.790917 | orchestrator | + size_bytes = (known after apply)
2026-01-02 00:02:34.790920 | orchestrator | + tags = (known after apply)
2026-01-02 00:02:34.790925 | orchestrator | + updated_at = (known after apply)
2026-01-02 00:02:34.790929 | orchestrator | }
2026-01-02 00:02:34.791014 | orchestrator |
2026-01-02 00:02:34.791026 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-02 00:02:34.791030 | orchestrator | # (config refers to values not yet known)
2026-01-02 00:02:34.791035 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-02 00:02:34.791039 | orchestrator | + checksum = (known after apply)
2026-01-02 00:02:34.791043 | orchestrator | + created_at = (known after apply)
2026-01-02 00:02:34.791047 | orchestrator | + file = (known after apply)
2026-01-02 00:02:34.791051 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.791056 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:34.791060 | orchestrator | + min_disk_gb = (known after apply)
2026-01-02 00:02:34.791064 | orchestrator | + min_ram_mb = (known after apply)
2026-01-02 00:02:34.791068 | orchestrator | + most_recent = true
2026-01-02 00:02:34.791072 | orchestrator | + name = (known after apply)
2026-01-02 00:02:34.791076 | orchestrator | + protected = (known after apply)
2026-01-02 00:02:34.791080 | orchestrator | + region = (known after apply)
2026-01-02 00:02:34.791084 | orchestrator | + schema = (known after apply)
2026-01-02 00:02:34.791088 | orchestrator | + size_bytes = (known after apply)
2026-01-02 00:02:34.791092 | orchestrator | + tags = (known after apply)
2026-01-02 00:02:34.791095 | orchestrator | + updated_at = (known after apply)
2026-01-02 00:02:34.791099 | orchestrator | }
2026-01-02 00:02:34.791180 | orchestrator |
2026-01-02 00:02:34.791192 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-02 00:02:34.791197 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-02 00:02:34.791201 | orchestrator | + content = (known after apply)
2026-01-02 00:02:34.791206 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-02 00:02:34.791210 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-02 00:02:34.791214 | orchestrator | + content_md5 = (known after apply)
2026-01-02 00:02:34.791218 | orchestrator | + content_sha1 = (known after apply)
2026-01-02 00:02:34.791222 | orchestrator | + content_sha256 = (known after apply)
2026-01-02 00:02:34.791226 | orchestrator | + content_sha512 = (known after apply)
2026-01-02 00:02:34.791230 | orchestrator | + directory_permission = "0777"
2026-01-02 00:02:34.791235 | orchestrator | + file_permission = "0644"
2026-01-02 00:02:34.791239 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-02 00:02:34.791243 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.791247 | orchestrator | }
2026-01-02 00:02:34.791336 | orchestrator |
2026-01-02 00:02:34.791348 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-02 00:02:34.791353 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-02 00:02:34.791357 | orchestrator | + content = (known after apply)
2026-01-02 00:02:34.791361 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-02 00:02:34.791365 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-02 00:02:34.791369 | orchestrator | + content_md5 = (known after apply)
2026-01-02 00:02:34.791373 | orchestrator | + content_sha1 = (known after apply)
2026-01-02 00:02:34.791377 | orchestrator | + content_sha256 = (known after apply)
2026-01-02 00:02:34.791389 | orchestrator | + content_sha512 = (known after apply)
2026-01-02 00:02:34.791393 | orchestrator | + directory_permission = "0777"
2026-01-02 00:02:34.791397 | orchestrator | + file_permission = "0644"
2026-01-02 00:02:34.791406 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-02 00:02:34.791410 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.791414 | orchestrator | }
2026-01-02 00:02:34.791486 | orchestrator |
2026-01-02 00:02:34.791498 | orchestrator | # local_file.inventory will be created
2026-01-02 00:02:34.791503 | orchestrator | + resource "local_file" "inventory" {
2026-01-02 00:02:34.791507 | orchestrator | + content = (known after apply)
2026-01-02 00:02:34.791511 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-02 00:02:34.791515 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-02 00:02:34.791519 | orchestrator | + content_md5 = (known after apply)
2026-01-02 00:02:34.791523 | orchestrator | + content_sha1 = (known after apply)
2026-01-02 00:02:34.791527 | orchestrator | + content_sha256 = (known after apply)
2026-01-02 00:02:34.791532 | orchestrator | + content_sha512 = (known after apply)
2026-01-02 00:02:34.791536 | orchestrator | + directory_permission = "0777"
2026-01-02 00:02:34.791540 | orchestrator | + file_permission = "0644"
2026-01-02 00:02:34.791544 | orchestrator | + filename = "inventory.ci"
2026-01-02 00:02:34.791548 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.791552 | orchestrator | }
2026-01-02 00:02:34.791632 | orchestrator |
2026-01-02 00:02:34.791644 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-02 00:02:34.791648 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-02 00:02:34.791653 | orchestrator | + content = (sensitive value)
2026-01-02 00:02:34.791657 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-02 00:02:34.791661 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-02 00:02:34.791665 | orchestrator | + content_md5 = (known after apply)
2026-01-02 00:02:34.791669 | orchestrator | + content_sha1 = (known after apply)
2026-01-02 00:02:34.791673 | orchestrator | + content_sha256 = (known after apply)
2026-01-02 00:02:34.791677 | orchestrator | + content_sha512 = (known after apply)
2026-01-02 00:02:34.791681 | orchestrator | + directory_permission = "0700"
2026-01-02 00:02:34.791685 | orchestrator | + file_permission = "0600"
2026-01-02 00:02:34.791689 | orchestrator | + filename = ".id_rsa.ci"
2026-01-02 00:02:34.791693 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.791697 | orchestrator | }
2026-01-02 00:02:34.791719 | orchestrator |
2026-01-02 00:02:34.791730 | orchestrator | # null_resource.node_semaphore will be created
2026-01-02 00:02:34.791735 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-02 00:02:34.791738 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.791743 | orchestrator | }
2026-01-02 00:02:34.791807 | orchestrator |
2026-01-02 00:02:34.791818 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-02 00:02:34.791823 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-02 00:02:34.791827 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:34.791831 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:34.791835 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.791839 | orchestrator | + image_id = (known after apply)
2026-01-02 00:02:34.791843 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:34.791847 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-02 00:02:34.791852 | orchestrator | + region = (known after apply)
2026-01-02 00:02:34.791856 | orchestrator | + size = 80
2026-01-02 00:02:34.791860 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:34.791864 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:34.791868 | orchestrator | }
2026-01-02 00:02:34.791936 | orchestrator |
2026-01-02 00:02:34.791948 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-02 00:02:34.791953 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-02 00:02:34.791957 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:34.791961 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:34.791965 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.791973 | orchestrator | + image_id = (known after apply)
2026-01-02 00:02:34.791977 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:34.791981 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-02 00:02:34.791985 | orchestrator | + region = (known after apply)
2026-01-02 00:02:34.791989 | orchestrator | + size = 80
2026-01-02 00:02:34.791993 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:34.791997 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:34.792001 | orchestrator | }
2026-01-02 00:02:34.792064 | orchestrator |
2026-01-02 00:02:34.792076 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-02 00:02:34.792081 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-02 00:02:34.792085 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:34.792089 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:34.792093 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.792097 | orchestrator | + image_id = (known after apply)
2026-01-02 00:02:34.792101 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:34.792105 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-02 00:02:34.792109 | orchestrator | + region = (known after apply)
2026-01-02 00:02:34.792113 | orchestrator | + size = 80
2026-01-02 00:02:34.792117 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:34.792121 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:34.792125 | orchestrator | }
2026-01-02 00:02:34.792183 | orchestrator |
2026-01-02 00:02:34.792195 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-02 00:02:34.792200 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-02 00:02:34.792204 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:34.792208 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:34.792212 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.792216 | orchestrator | + image_id = (known after apply)
2026-01-02 00:02:34.792220 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:34.792224 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-02 00:02:34.792228 | orchestrator | + region = (known after apply)
2026-01-02 00:02:34.792232 | orchestrator | + size = 80
2026-01-02 00:02:34.792238 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:34.792243 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:34.792247 | orchestrator | }
2026-01-02 00:02:34.792310 | orchestrator |
2026-01-02 00:02:34.792353 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-02 00:02:34.792358 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-02 00:02:34.792362 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:34.792366 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:34.792370 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.792374 | orchestrator | + image_id = (known after apply)
2026-01-02 00:02:34.792378 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:34.792382 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-02 00:02:34.792386 | orchestrator | + region = (known after apply)
2026-01-02 00:02:34.792390 | orchestrator | + size = 80
2026-01-02 00:02:34.792394 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:34.792398 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:34.792402 | orchestrator | }
2026-01-02 00:02:34.792462 | orchestrator |
2026-01-02 00:02:34.792473 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-02 00:02:34.792478 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-02 00:02:34.792482 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:34.792486 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:34.792490 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.792499 | orchestrator | + image_id = (known after apply)
2026-01-02 00:02:34.792503 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:34.792507 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-02 00:02:34.792511 | orchestrator | + region = (known after apply)
2026-01-02 00:02:34.792515 | orchestrator | + size = 80
2026-01-02 00:02:34.792519 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:34.792523 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:34.792527 | orchestrator | }
2026-01-02 00:02:34.792595 | orchestrator |
2026-01-02 00:02:34.792606 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-02 00:02:34.792611 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-02 00:02:34.792615 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:34.792619 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:34.792623 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.792627 | orchestrator | + image_id = (known after apply)
2026-01-02 00:02:34.792631 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:34.792636 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-02 00:02:34.792640 | orchestrator | + region = (known after apply)
2026-01-02 00:02:34.792644 | orchestrator | + size = 80
2026-01-02 00:02:34.792648 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:34.792652 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:34.792657 | orchestrator | }
2026-01-02 00:02:34.792717 | orchestrator |
2026-01-02 00:02:34.792729 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-02 00:02:34.792734 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-02 00:02:34.792738 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:34.792742 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:34.792746 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.792750 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:34.792754 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-02 00:02:34.792758 | orchestrator | + region = (known after apply)
2026-01-02 00:02:34.792762 | orchestrator | + size = 20
2026-01-02 00:02:34.792766 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:34.792771 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:34.792775 | orchestrator | }
2026-01-02 00:02:34.792833 | orchestrator |
2026-01-02 00:02:34.792845 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-02 00:02:34.792850 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-02 00:02:34.792854 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:34.792858 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:34.792862 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.792866 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:34.792870 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-02 00:02:34.792874 | orchestrator | + region = (known after apply)
2026-01-02 00:02:34.792878 | orchestrator | + size = 20
2026-01-02 00:02:34.792882 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:34.792886 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:34.792890 | orchestrator | }
2026-01-02 00:02:34.792956 | orchestrator |
2026-01-02 00:02:34.792968 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-02 00:02:34.792973 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-02 00:02:34.792977 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:34.792981 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:34.792985 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.792989 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:34.792993 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-02 00:02:34.792997 | orchestrator | + region = (known after apply)
2026-01-02 00:02:34.793005 | orchestrator | + size = 20
2026-01-02 00:02:34.793009 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:34.793013 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:34.793017 | orchestrator | }
2026-01-02 00:02:34.793079 | orchestrator |
2026-01-02 00:02:34.793091 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-02 00:02:34.793095 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-02 00:02:34.793100 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:34.793104 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:34.793108 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.793115 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:34.793119 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-02 00:02:34.793123 | orchestrator | + region = (known after apply)
2026-01-02 00:02:34.793127 | orchestrator | + size = 20
2026-01-02 00:02:34.793131 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:34.793135 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:34.793139 | orchestrator | }
2026-01-02 00:02:34.793194 | orchestrator |
2026-01-02 00:02:34.793206 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-02 00:02:34.793211 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-02 00:02:34.793215 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:34.793219 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:34.793223 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.793227 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:34.793231 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-02 00:02:34.793235 | orchestrator | + region = (known after apply)
2026-01-02 00:02:34.793239 | orchestrator | + size = 20
2026-01-02 00:02:34.793243 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:34.793247 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:34.793251 | orchestrator | }
2026-01-02 00:02:34.793330 | orchestrator |
2026-01-02 00:02:34.793343 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-02 00:02:34.793347 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-02 00:02:34.793351 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:34.793355 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:34.793359 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.793363 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:34.793367 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-02 00:02:34.793371 | orchestrator | + region = (known after apply)
2026-01-02 00:02:34.793375 | orchestrator | + size = 20
2026-01-02 00:02:34.793379 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:34.793383 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:34.793387 | orchestrator | }
2026-01-02 00:02:34.793446 | orchestrator |
2026-01-02 00:02:34.793458 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-02 00:02:34.793462 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-02 00:02:34.793467 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:34.793470 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:34.793475 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.793479 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:34.793482 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-02 00:02:34.793486 | orchestrator | + region = (known after apply)
2026-01-02 00:02:34.793490 | orchestrator | + size = 20
2026-01-02 00:02:34.793494 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:34.793498 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:34.793502 | orchestrator | }
2026-01-02 00:02:34.793562 | orchestrator |
2026-01-02 00:02:34.793573 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-02 00:02:34.793578 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-02 00:02:34.793587 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:34.793591 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:34.793595 | orchestrator | + id = (known after apply)
2026-01-02 00:02:34.793599 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:34.793603 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-02 00:02:34.793607 | orchestrator | + region = (known after apply)
2026-01-02 00:02:34.793611 | orchestrator | + size = 20
2026-01-02 00:02:34.793615 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:34.793619 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:34.793623 | orchestrator | }
2026-01-02 00:02:34.793687 | orchestrator |
2026-01-02 00:02:34.793699 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-02 00:02:34.793704 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-02 00:02:34.793708 | orchestrator | + attachment = (known after apply) 2026-01-02 00:02:34.793712 | orchestrator | + availability_zone = "nova" 2026-01-02 00:02:34.793716 | orchestrator | + id = (known after apply) 2026-01-02 00:02:34.793720 | orchestrator | + metadata = (known after apply) 2026-01-02 00:02:34.793724 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-02 00:02:34.793728 | orchestrator | + region = (known after apply) 2026-01-02 00:02:34.793732 | orchestrator | + size = 20 2026-01-02 00:02:34.793736 | orchestrator | + volume_retype_policy = "never" 2026-01-02 00:02:34.793740 | orchestrator | + volume_type = "ssd" 2026-01-02 00:02:34.793744 | orchestrator | } 2026-01-02 00:02:34.797084 | orchestrator | 2026-01-02 00:02:34.802254 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-02 00:02:34.802284 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-02 00:02:34.802291 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-02 00:02:34.802297 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-02 00:02:34.802302 | orchestrator | + all_metadata = (known after apply) 2026-01-02 00:02:34.802308 | orchestrator | + all_tags = (known after apply) 2026-01-02 00:02:34.802333 | orchestrator | + availability_zone = "nova" 2026-01-02 00:02:34.802339 | orchestrator | + config_drive = true 2026-01-02 00:02:34.802353 | orchestrator | + created = (known after apply) 2026-01-02 00:02:34.802359 | orchestrator | + flavor_id = (known after apply) 2026-01-02 00:02:34.802364 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-02 00:02:34.802388 | orchestrator | + force_delete = false 2026-01-02 00:02:34.802394 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-02 00:02:34.802399 | 
orchestrator | + id = (known after apply) 2026-01-02 00:02:34.802404 | orchestrator | + image_id = (known after apply) 2026-01-02 00:02:34.802409 | orchestrator | + image_name = (known after apply) 2026-01-02 00:02:34.802414 | orchestrator | + key_pair = "testbed" 2026-01-02 00:02:34.802420 | orchestrator | + name = "testbed-manager" 2026-01-02 00:02:34.802425 | orchestrator | + power_state = "active" 2026-01-02 00:02:34.802430 | orchestrator | + region = (known after apply) 2026-01-02 00:02:34.802435 | orchestrator | + security_groups = (known after apply) 2026-01-02 00:02:34.802439 | orchestrator | + stop_before_destroy = false 2026-01-02 00:02:34.802444 | orchestrator | + updated = (known after apply) 2026-01-02 00:02:34.802465 | orchestrator | + user_data = (sensitive value) 2026-01-02 00:02:34.802470 | orchestrator | 2026-01-02 00:02:34.802476 | orchestrator | + block_device { 2026-01-02 00:02:34.802482 | orchestrator | + boot_index = 0 2026-01-02 00:02:34.802486 | orchestrator | + delete_on_termination = false 2026-01-02 00:02:34.802491 | orchestrator | + destination_type = "volume" 2026-01-02 00:02:34.802496 | orchestrator | + multiattach = false 2026-01-02 00:02:34.802501 | orchestrator | + source_type = "volume" 2026-01-02 00:02:34.802506 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:34.802518 | orchestrator | } 2026-01-02 00:02:34.802524 | orchestrator | 2026-01-02 00:02:34.802528 | orchestrator | + network { 2026-01-02 00:02:34.802533 | orchestrator | + access_network = false 2026-01-02 00:02:34.802538 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-02 00:02:34.802543 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-02 00:02:34.802548 | orchestrator | + mac = (known after apply) 2026-01-02 00:02:34.802553 | orchestrator | + name = (known after apply) 2026-01-02 00:02:34.802557 | orchestrator | + port = (known after apply) 2026-01-02 00:02:34.802562 | orchestrator | + uuid = (known after apply) 2026-01-02 
00:02:34.802567 | orchestrator | } 2026-01-02 00:02:34.802572 | orchestrator | } 2026-01-02 00:02:34.802583 | orchestrator | 2026-01-02 00:02:34.802588 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-02 00:02:34.802593 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-02 00:02:34.802598 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-02 00:02:34.802603 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-02 00:02:34.802608 | orchestrator | + all_metadata = (known after apply) 2026-01-02 00:02:34.802612 | orchestrator | + all_tags = (known after apply) 2026-01-02 00:02:34.802617 | orchestrator | + availability_zone = "nova" 2026-01-02 00:02:34.802622 | orchestrator | + config_drive = true 2026-01-02 00:02:34.802627 | orchestrator | + created = (known after apply) 2026-01-02 00:02:34.802631 | orchestrator | + flavor_id = (known after apply) 2026-01-02 00:02:34.802636 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-02 00:02:34.802641 | orchestrator | + force_delete = false 2026-01-02 00:02:34.802646 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-02 00:02:34.802652 | orchestrator | + id = (known after apply) 2026-01-02 00:02:34.802656 | orchestrator | + image_id = (known after apply) 2026-01-02 00:02:34.802661 | orchestrator | + image_name = (known after apply) 2026-01-02 00:02:34.802666 | orchestrator | + key_pair = "testbed" 2026-01-02 00:02:34.802671 | orchestrator | + name = "testbed-node-0" 2026-01-02 00:02:34.802675 | orchestrator | + power_state = "active" 2026-01-02 00:02:34.802680 | orchestrator | + region = (known after apply) 2026-01-02 00:02:34.802685 | orchestrator | + security_groups = (known after apply) 2026-01-02 00:02:34.802690 | orchestrator | + stop_before_destroy = false 2026-01-02 00:02:34.802695 | orchestrator | + updated = (known after apply) 2026-01-02 00:02:34.802700 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-02 00:02:34.802705 | orchestrator | 2026-01-02 00:02:34.802710 | orchestrator | + block_device { 2026-01-02 00:02:34.802715 | orchestrator | + boot_index = 0 2026-01-02 00:02:34.802720 | orchestrator | + delete_on_termination = false 2026-01-02 00:02:34.802725 | orchestrator | + destination_type = "volume" 2026-01-02 00:02:34.802729 | orchestrator | + multiattach = false 2026-01-02 00:02:34.802734 | orchestrator | + source_type = "volume" 2026-01-02 00:02:34.802739 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:34.802744 | orchestrator | } 2026-01-02 00:02:34.802749 | orchestrator | 2026-01-02 00:02:34.802753 | orchestrator | + network { 2026-01-02 00:02:34.802758 | orchestrator | + access_network = false 2026-01-02 00:02:34.802763 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-02 00:02:34.802768 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-02 00:02:34.802773 | orchestrator | + mac = (known after apply) 2026-01-02 00:02:34.802778 | orchestrator | + name = (known after apply) 2026-01-02 00:02:34.802782 | orchestrator | + port = (known after apply) 2026-01-02 00:02:34.802787 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:34.802792 | orchestrator | } 2026-01-02 00:02:34.802797 | orchestrator | } 2026-01-02 00:02:34.802802 | orchestrator | 2026-01-02 00:02:34.802807 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-02 00:02:34.802812 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-02 00:02:34.802816 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-02 00:02:34.802825 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-02 00:02:34.802830 | orchestrator | + all_metadata = (known after apply) 2026-01-02 00:02:34.802835 | orchestrator | + all_tags = (known after apply) 2026-01-02 00:02:34.802839 | orchestrator | + availability_zone = "nova" 2026-01-02 00:02:34.802844 
| orchestrator | + config_drive = true 2026-01-02 00:02:34.802849 | orchestrator | + created = (known after apply) 2026-01-02 00:02:34.802854 | orchestrator | + flavor_id = (known after apply) 2026-01-02 00:02:34.802859 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-02 00:02:34.802863 | orchestrator | + force_delete = false 2026-01-02 00:02:34.802868 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-02 00:02:34.802873 | orchestrator | + id = (known after apply) 2026-01-02 00:02:34.802878 | orchestrator | + image_id = (known after apply) 2026-01-02 00:02:34.802883 | orchestrator | + image_name = (known after apply) 2026-01-02 00:02:34.802887 | orchestrator | + key_pair = "testbed" 2026-01-02 00:02:34.802892 | orchestrator | + name = "testbed-node-1" 2026-01-02 00:02:34.802897 | orchestrator | + power_state = "active" 2026-01-02 00:02:34.802902 | orchestrator | + region = (known after apply) 2026-01-02 00:02:34.802907 | orchestrator | + security_groups = (known after apply) 2026-01-02 00:02:34.802911 | orchestrator | + stop_before_destroy = false 2026-01-02 00:02:34.802916 | orchestrator | + updated = (known after apply) 2026-01-02 00:02:34.802924 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-02 00:02:34.802929 | orchestrator | 2026-01-02 00:02:34.802934 | orchestrator | + block_device { 2026-01-02 00:02:34.802939 | orchestrator | + boot_index = 0 2026-01-02 00:02:34.802944 | orchestrator | + delete_on_termination = false 2026-01-02 00:02:34.802949 | orchestrator | + destination_type = "volume" 2026-01-02 00:02:34.802953 | orchestrator | + multiattach = false 2026-01-02 00:02:34.802958 | orchestrator | + source_type = "volume" 2026-01-02 00:02:34.802963 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:34.802968 | orchestrator | } 2026-01-02 00:02:34.802973 | orchestrator | 2026-01-02 00:02:34.802978 | orchestrator | + network { 2026-01-02 00:02:34.802982 | orchestrator | + access_network = 
false 2026-01-02 00:02:34.802987 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-02 00:02:34.802992 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-02 00:02:34.802997 | orchestrator | + mac = (known after apply) 2026-01-02 00:02:34.803001 | orchestrator | + name = (known after apply) 2026-01-02 00:02:34.803006 | orchestrator | + port = (known after apply) 2026-01-02 00:02:34.803011 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:34.803016 | orchestrator | } 2026-01-02 00:02:34.803021 | orchestrator | } 2026-01-02 00:02:34.803028 | orchestrator | 2026-01-02 00:02:34.803033 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-02 00:02:34.803038 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-02 00:02:34.803042 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-02 00:02:34.803047 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-02 00:02:34.803055 | orchestrator | + all_metadata = (known after apply) 2026-01-02 00:02:34.803060 | orchestrator | + all_tags = (known after apply) 2026-01-02 00:02:34.803065 | orchestrator | + availability_zone = "nova" 2026-01-02 00:02:34.803069 | orchestrator | + config_drive = true 2026-01-02 00:02:34.803074 | orchestrator | + created = (known after apply) 2026-01-02 00:02:34.803079 | orchestrator | + flavor_id = (known after apply) 2026-01-02 00:02:34.803084 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-02 00:02:34.803089 | orchestrator | + force_delete = false 2026-01-02 00:02:34.803094 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-02 00:02:34.803098 | orchestrator | + id = (known after apply) 2026-01-02 00:02:34.803103 | orchestrator | + image_id = (known after apply) 2026-01-02 00:02:34.803116 | orchestrator | + image_name = (known after apply) 2026-01-02 00:02:34.803121 | orchestrator | + key_pair = "testbed" 2026-01-02 00:02:34.803125 | orchestrator | + name = 
"testbed-node-2" 2026-01-02 00:02:34.803130 | orchestrator | + power_state = "active" 2026-01-02 00:02:34.803135 | orchestrator | + region = (known after apply) 2026-01-02 00:02:34.803140 | orchestrator | + security_groups = (known after apply) 2026-01-02 00:02:34.803145 | orchestrator | + stop_before_destroy = false 2026-01-02 00:02:34.803149 | orchestrator | + updated = (known after apply) 2026-01-02 00:02:34.803154 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-02 00:02:34.803159 | orchestrator | 2026-01-02 00:02:34.803164 | orchestrator | + block_device { 2026-01-02 00:02:34.803169 | orchestrator | + boot_index = 0 2026-01-02 00:02:34.803174 | orchestrator | + delete_on_termination = false 2026-01-02 00:02:34.803178 | orchestrator | + destination_type = "volume" 2026-01-02 00:02:34.803183 | orchestrator | + multiattach = false 2026-01-02 00:02:34.803188 | orchestrator | + source_type = "volume" 2026-01-02 00:02:34.803193 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:34.803198 | orchestrator | } 2026-01-02 00:02:34.803202 | orchestrator | 2026-01-02 00:02:34.803207 | orchestrator | + network { 2026-01-02 00:02:34.803212 | orchestrator | + access_network = false 2026-01-02 00:02:34.803217 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-02 00:02:34.803222 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-02 00:02:34.803226 | orchestrator | + mac = (known after apply) 2026-01-02 00:02:34.803231 | orchestrator | + name = (known after apply) 2026-01-02 00:02:34.803236 | orchestrator | + port = (known after apply) 2026-01-02 00:02:34.803241 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:34.803246 | orchestrator | } 2026-01-02 00:02:34.803250 | orchestrator | } 2026-01-02 00:02:34.803255 | orchestrator | 2026-01-02 00:02:34.803263 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-02 00:02:34.803268 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-01-02 00:02:34.803273 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-02 00:02:34.803278 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-02 00:02:34.803283 | orchestrator | + all_metadata = (known after apply) 2026-01-02 00:02:34.803288 | orchestrator | + all_tags = (known after apply) 2026-01-02 00:02:34.803292 | orchestrator | + availability_zone = "nova" 2026-01-02 00:02:34.803297 | orchestrator | + config_drive = true 2026-01-02 00:02:34.803302 | orchestrator | + created = (known after apply) 2026-01-02 00:02:34.803307 | orchestrator | + flavor_id = (known after apply) 2026-01-02 00:02:34.803328 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-02 00:02:34.803333 | orchestrator | + force_delete = false 2026-01-02 00:02:34.803338 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-02 00:02:34.803343 | orchestrator | + id = (known after apply) 2026-01-02 00:02:34.803347 | orchestrator | + image_id = (known after apply) 2026-01-02 00:02:34.803352 | orchestrator | + image_name = (known after apply) 2026-01-02 00:02:34.803357 | orchestrator | + key_pair = "testbed" 2026-01-02 00:02:34.803362 | orchestrator | + name = "testbed-node-3" 2026-01-02 00:02:34.803367 | orchestrator | + power_state = "active" 2026-01-02 00:02:34.803371 | orchestrator | + region = (known after apply) 2026-01-02 00:02:34.803376 | orchestrator | + security_groups = (known after apply) 2026-01-02 00:02:34.803381 | orchestrator | + stop_before_destroy = false 2026-01-02 00:02:34.803386 | orchestrator | + updated = (known after apply) 2026-01-02 00:02:34.803391 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-02 00:02:34.803395 | orchestrator | 2026-01-02 00:02:34.803400 | orchestrator | + block_device { 2026-01-02 00:02:34.803405 | orchestrator | + boot_index = 0 2026-01-02 00:02:34.803410 | orchestrator | + delete_on_termination = false 2026-01-02 
00:02:34.803415 | orchestrator | + destination_type = "volume" 2026-01-02 00:02:34.803423 | orchestrator | + multiattach = false 2026-01-02 00:02:34.803428 | orchestrator | + source_type = "volume" 2026-01-02 00:02:34.803433 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:34.803438 | orchestrator | } 2026-01-02 00:02:34.803442 | orchestrator | 2026-01-02 00:02:34.803447 | orchestrator | + network { 2026-01-02 00:02:34.803452 | orchestrator | + access_network = false 2026-01-02 00:02:34.803457 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-02 00:02:34.803462 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-02 00:02:34.803466 | orchestrator | + mac = (known after apply) 2026-01-02 00:02:34.803471 | orchestrator | + name = (known after apply) 2026-01-02 00:02:34.803476 | orchestrator | + port = (known after apply) 2026-01-02 00:02:34.803481 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:34.803486 | orchestrator | } 2026-01-02 00:02:34.803490 | orchestrator | } 2026-01-02 00:02:34.803498 | orchestrator | 2026-01-02 00:02:34.803503 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-02 00:02:34.803508 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-02 00:02:34.803513 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-02 00:02:34.803518 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-02 00:02:34.803523 | orchestrator | + all_metadata = (known after apply) 2026-01-02 00:02:34.803527 | orchestrator | + all_tags = (known after apply) 2026-01-02 00:02:34.803532 | orchestrator | + availability_zone = "nova" 2026-01-02 00:02:34.803537 | orchestrator | + config_drive = true 2026-01-02 00:02:34.803542 | orchestrator | + created = (known after apply) 2026-01-02 00:02:34.803547 | orchestrator | + flavor_id = (known after apply) 2026-01-02 00:02:34.803551 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-02 00:02:34.803556 | 
orchestrator | + force_delete = false 2026-01-02 00:02:34.803561 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-02 00:02:34.803566 | orchestrator | + id = (known after apply) 2026-01-02 00:02:34.803570 | orchestrator | + image_id = (known after apply) 2026-01-02 00:02:34.803575 | orchestrator | + image_name = (known after apply) 2026-01-02 00:02:34.803580 | orchestrator | + key_pair = "testbed" 2026-01-02 00:02:34.803585 | orchestrator | + name = "testbed-node-4" 2026-01-02 00:02:34.803590 | orchestrator | + power_state = "active" 2026-01-02 00:02:34.803594 | orchestrator | + region = (known after apply) 2026-01-02 00:02:34.803599 | orchestrator | + security_groups = (known after apply) 2026-01-02 00:02:34.803604 | orchestrator | + stop_before_destroy = false 2026-01-02 00:02:34.803609 | orchestrator | + updated = (known after apply) 2026-01-02 00:02:34.803614 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-02 00:02:34.803619 | orchestrator | 2026-01-02 00:02:34.803623 | orchestrator | + block_device { 2026-01-02 00:02:34.803628 | orchestrator | + boot_index = 0 2026-01-02 00:02:34.803633 | orchestrator | + delete_on_termination = false 2026-01-02 00:02:34.803638 | orchestrator | + destination_type = "volume" 2026-01-02 00:02:34.803642 | orchestrator | + multiattach = false 2026-01-02 00:02:34.803647 | orchestrator | + source_type = "volume" 2026-01-02 00:02:34.803652 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:34.803657 | orchestrator | } 2026-01-02 00:02:34.803662 | orchestrator | 2026-01-02 00:02:34.803667 | orchestrator | + network { 2026-01-02 00:02:34.803671 | orchestrator | + access_network = false 2026-01-02 00:02:34.803676 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-02 00:02:34.803681 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-02 00:02:34.803686 | orchestrator | + mac = (known after apply) 2026-01-02 00:02:34.803690 | orchestrator | + name = (known 
after apply) 2026-01-02 00:02:34.803695 | orchestrator | + port = (known after apply) 2026-01-02 00:02:34.803700 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:34.803705 | orchestrator | } 2026-01-02 00:02:34.803710 | orchestrator | } 2026-01-02 00:02:34.803719 | orchestrator | 2026-01-02 00:02:34.803724 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-02 00:02:34.803729 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-02 00:02:34.803734 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-02 00:02:34.803739 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-02 00:02:34.803743 | orchestrator | + all_metadata = (known after apply) 2026-01-02 00:02:34.803748 | orchestrator | + all_tags = (known after apply) 2026-01-02 00:02:34.803753 | orchestrator | + availability_zone = "nova" 2026-01-02 00:02:34.803758 | orchestrator | + config_drive = true 2026-01-02 00:02:34.803763 | orchestrator | + created = (known after apply) 2026-01-02 00:02:34.803767 | orchestrator | + flavor_id = (known after apply) 2026-01-02 00:02:34.803772 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-02 00:02:34.803777 | orchestrator | + force_delete = false 2026-01-02 00:02:34.803782 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-02 00:02:34.803786 | orchestrator | + id = (known after apply) 2026-01-02 00:02:34.803791 | orchestrator | + image_id = (known after apply) 2026-01-02 00:02:34.803796 | orchestrator | + image_name = (known after apply) 2026-01-02 00:02:34.803801 | orchestrator | + key_pair = "testbed" 2026-01-02 00:02:34.803806 | orchestrator | + name = "testbed-node-5" 2026-01-02 00:02:34.803810 | orchestrator | + power_state = "active" 2026-01-02 00:02:34.803815 | orchestrator | + region = (known after apply) 2026-01-02 00:02:34.803820 | orchestrator | + security_groups = (known after apply) 2026-01-02 00:02:34.803825 | orchestrator | + 
stop_before_destroy = false 2026-01-02 00:02:34.803829 | orchestrator | + updated = (known after apply) 2026-01-02 00:02:34.803834 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-02 00:02:34.803839 | orchestrator | 2026-01-02 00:02:34.803844 | orchestrator | + block_device { 2026-01-02 00:02:34.803849 | orchestrator | + boot_index = 0 2026-01-02 00:02:34.803853 | orchestrator | + delete_on_termination = false 2026-01-02 00:02:34.803858 | orchestrator | + destination_type = "volume" 2026-01-02 00:02:34.803863 | orchestrator | + multiattach = false 2026-01-02 00:02:34.803868 | orchestrator | + source_type = "volume" 2026-01-02 00:02:34.803873 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:34.803877 | orchestrator | } 2026-01-02 00:02:34.803882 | orchestrator | 2026-01-02 00:02:34.803887 | orchestrator | + network { 2026-01-02 00:02:34.803892 | orchestrator | + access_network = false 2026-01-02 00:02:34.803897 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-02 00:02:34.803901 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-02 00:02:34.803906 | orchestrator | + mac = (known after apply) 2026-01-02 00:02:34.803911 | orchestrator | + name = (known after apply) 2026-01-02 00:02:34.803916 | orchestrator | + port = (known after apply) 2026-01-02 00:02:34.803921 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:34.803926 | orchestrator | } 2026-01-02 00:02:34.803930 | orchestrator | } 2026-01-02 00:02:34.803935 | orchestrator | 2026-01-02 00:02:34.803940 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-02 00:02:34.803945 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-01-02 00:02:34.803950 | orchestrator | + fingerprint = (known after apply) 2026-01-02 00:02:34.803955 | orchestrator | + id = (known after apply) 2026-01-02 00:02:34.803959 | orchestrator | + name = "testbed" 2026-01-02 00:02:34.803964 | orchestrator | + private_key = 
(sensitive value) 2026-01-02 00:02:34.803969 | orchestrator | + public_key = (known after apply) 2026-01-02 00:02:34.803974 | orchestrator | + region = (known after apply) 2026-01-02 00:02:34.803979 | orchestrator | + user_id = (known after apply) 2026-01-02 00:02:34.803983 | orchestrator | } 2026-01-02 00:02:34.803988 | orchestrator | 2026-01-02 00:02:34.803997 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-02 00:02:34.804003 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-02 00:02:34.804012 | orchestrator | + device = (known after apply) 2026-01-02 00:02:34.804016 | orchestrator | + id = (known after apply) 2026-01-02 00:02:34.804021 | orchestrator | + instance_id = (known after apply) 2026-01-02 00:02:34.804026 | orchestrator | + region = (known after apply) 2026-01-02 00:02:34.804034 | orchestrator | + volume_id = (known after apply) 2026-01-02 00:02:34.804039 | orchestrator | } 2026-01-02 00:02:34.804044 | orchestrator | 2026-01-02 00:02:34.804049 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-02 00:02:34.804054 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-02 00:02:34.804059 | orchestrator | + device = (known after apply) 2026-01-02 00:02:34.804064 | orchestrator | + id = (known after apply) 2026-01-02 00:02:34.804068 | orchestrator | + instance_id = (known after apply) 2026-01-02 00:02:34.804073 | orchestrator | + region = (known after apply) 2026-01-02 00:02:34.804078 | orchestrator | + volume_id = (known after apply) 2026-01-02 00:02:34.804083 | orchestrator | } 2026-01-02 00:02:34.804087 | orchestrator | 2026-01-02 00:02:34.804092 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-02 00:02:34.804097 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-01-02 00:02:34.806956 | orchestrator | + network_id = (known after apply) 2026-01-02 00:02:34.806961 | orchestrator | + no_gateway = false 2026-01-02 00:02:34.806965 | orchestrator | + region = (known after apply) 2026-01-02 00:02:34.806969 | orchestrator | + service_types = (known after apply) 2026-01-02 00:02:34.806977 | orchestrator | + tenant_id = (known after apply) 2026-01-02 00:02:34.806981 | orchestrator | 2026-01-02 00:02:34.806985 | orchestrator | + allocation_pool { 2026-01-02 00:02:34.806990 | orchestrator | + end = "192.168.31.250" 2026-01-02 00:02:34.806994 | orchestrator | + start = "192.168.31.200" 2026-01-02 00:02:34.806998 | orchestrator | } 2026-01-02 00:02:34.807003 | orchestrator | } 2026-01-02 00:02:34.807007 | orchestrator | 2026-01-02 00:02:34.807011 | orchestrator | # terraform_data.image will be created 2026-01-02 00:02:34.807016 | orchestrator | + resource "terraform_data" "image" { 2026-01-02 00:02:34.807020 | orchestrator | + id = (known after apply) 2026-01-02 00:02:34.807026 | orchestrator | + input = "Ubuntu 24.04" 2026-01-02 00:02:34.807031 | orchestrator | + output = (known after apply) 2026-01-02 00:02:34.807035 | orchestrator | } 2026-01-02 00:02:34.807039 | orchestrator | 2026-01-02 00:02:34.807044 | orchestrator | # terraform_data.image_node will be created 2026-01-02 00:02:34.807048 | orchestrator | + resource "terraform_data" "image_node" { 2026-01-02 00:02:34.807052 | orchestrator | + id = (known after apply) 2026-01-02 00:02:34.807057 | orchestrator | + input = "Ubuntu 24.04" 2026-01-02 00:02:34.807061 | orchestrator | + output = (known after apply) 2026-01-02 00:02:34.807065 | orchestrator | } 2026-01-02 00:02:34.807070 | orchestrator | 2026-01-02 00:02:34.807074 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
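The VRRP rule shown in the plan (IP protocol number 112, used by keepalived on the testbed nodes) and the management subnet with its pinned allocation pool could be declared roughly as below. This is a minimal HCL sketch reconstructed from the plan output above, not the literal testbed source; in particular, the `security_group_id` and `network_id` references are assumptions, since the plan only shows them as "(known after apply)".

```hcl
# Hypothetical reconstruction from the plan output; resource names follow the plan.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # VRRP has no port, only an IP protocol number
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id # assumed wiring
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id # assumed wiring
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP hands out only .200-.250, leaving the rest of the /20 free
  # for statically addressed instances.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```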
2026-01-02 00:02:34.807078 | orchestrator |
2026-01-02 00:02:34.807083 | orchestrator | Changes to Outputs:
2026-01-02 00:02:34.807087 | orchestrator | + manager_address = (sensitive value)
2026-01-02 00:02:34.807091 | orchestrator | + private_key = (sensitive value)
2026-01-02 00:02:35.037011 | orchestrator | terraform_data.image: Creating...
2026-01-02 00:02:35.037076 | orchestrator | terraform_data.image: Creation complete after 0s [id=0618a37c-3f59-9a77-429d-10ddf4092bd6]
2026-01-02 00:02:35.037083 | orchestrator | terraform_data.image_node: Creating...
2026-01-02 00:02:35.037089 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=cd892fbf-51bc-266f-ad90-ff3b9d7e7665]
2026-01-02 00:02:35.057854 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-01-02 00:02:35.061048 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-01-02 00:02:35.080129 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-01-02 00:02:35.080191 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-01-02 00:02:35.080209 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-01-02 00:02:35.080686 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-01-02 00:02:35.080843 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-01-02 00:02:35.098073 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-01-02 00:02:35.106087 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-01-02 00:02:35.126076 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-01-02 00:02:35.602699 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-02 00:02:35.609567 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-01-02 00:02:35.617462 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-02 00:02:35.631780 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-01-02 00:02:35.649958 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-01-02 00:02:35.655276 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-01-02 00:02:36.138936 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=80074def-cc26-4c4a-800e-bdfec881801d]
2026-01-02 00:02:36.146637 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-01-02 00:02:38.811503 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=3f193762-36b0-4c27-b28e-8efb206edc66]
2026-01-02 00:02:38.815830 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-01-02 00:02:38.822821 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=496b1234-da7e-4975-8125-a1f8cbe1a452]
2026-01-02 00:02:38.826746 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-01-02 00:02:38.831987 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=ace49a83-40fe-462c-82a5-a32ee72a9346]
2026-01-02 00:02:38.840195 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=26cdd52f-83be-4086-bce2-9cb6df4f24ab]
2026-01-02 00:02:38.846847 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-01-02 00:02:38.850297 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-01-02 00:02:38.864750 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=3a47a132-03ad-4adf-a37b-d405efe1a07c]
2026-01-02 00:02:38.865514 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=91cfe094-4682-4bfc-95e3-88354566cb8a]
2026-01-02 00:02:38.878145 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-01-02 00:02:38.886569 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-01-02 00:02:38.900687 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=84499345-a879-443a-82ee-40e5571fa8cd]
2026-01-02 00:02:38.910347 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-01-02 00:02:38.914573 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=d24ee98a65990ddd013e0af3d594029d20eeb6d6]
2026-01-02 00:02:38.927230 | orchestrator | local_file.id_rsa_pub: Creating...
2026-01-02 00:02:38.934946 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=6d9d2903-81fe-42d1-9111-d7d9a87231b0]
2026-01-02 00:02:38.936076 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=548a119083748b73324de756303fa09107c54a29]
2026-01-02 00:02:38.943860 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-01-02 00:02:38.950360 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=7a849538-9b89-4e07-840a-8a2ecc10a58d]
2026-01-02 00:02:39.536655 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=f4449738-099c-443f-90a1-9eef773d53ef]
2026-01-02 00:02:40.013837 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=19b61c2e-c527-4129-a3fd-617ff8c533e4]
2026-01-02 00:02:40.023473 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-01-02 00:02:42.314101 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=6849400f-5622-4491-9b67-d38598d17a9c]
2026-01-02 00:02:42.338612 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=817579c1-b31d-4bbf-8af4-60793d227397]
2026-01-02 00:02:42.369601 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=b48e1683-8cea-4971-bdc8-cd04d1d3aa28]
2026-01-02 00:02:42.375232 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=ac41253e-4ec4-41ef-b319-b223dc253c92]
2026-01-02 00:02:42.378707 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=3d352e57-4d3e-4622-b1e9-3f51c1a118c4]
2026-01-02 00:02:42.396121 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=cfac6910-579b-4d78-84a3-2d39a75847a6]
2026-01-02 00:02:43.546842 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=fd5b5702-d56c-4b5a-a33f-4e904d59b45f]
2026-01-02 00:02:43.551124 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-01-02 00:02:43.554044 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-01-02 00:02:43.554103 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-01-02 00:02:43.782844 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=af3eb7ab-cf78-4625-a9a4-fd3b3869232a]
2026-01-02 00:02:43.794137 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-01-02 00:02:43.794222 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-01-02 00:02:43.794245 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-01-02 00:02:43.800887 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-01-02 00:02:43.800944 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-01-02 00:02:43.800954 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-01-02 00:02:43.938599 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=5ad9892c-3071-4395-b7c4-2e8601435e23]
2026-01-02 00:02:43.950186 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-01-02 00:02:43.952140 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-01-02 00:02:43.952658 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-01-02 00:02:43.986339 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=49fb326b-af40-42a8-aac6-2884df74edad]
2026-01-02 00:02:43.994954 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-01-02 00:02:44.320069 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=5df6d3dd-c6bd-4ce8-8c8b-9085572a89dd]
2026-01-02 00:02:44.331748 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-01-02 00:02:44.395460 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=c7145e24-1619-4a24-9b93-5a91040b51af]
2026-01-02 00:02:44.411365 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-01-02 00:02:44.605692 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=400e3786-3b3e-4b47-a5d2-26e29a732bde]
2026-01-02 00:02:44.617017 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-01-02 00:02:44.782968 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=730affe4-023c-4a50-addf-2c2ffca9b3a3]
2026-01-02 00:02:44.797926 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-01-02 00:02:44.893797 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=c9d91427-e236-4a8d-9975-08a42c99e6bb]
2026-01-02 00:02:44.906771 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-01-02 00:02:45.013649 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=6437d547-ef65-419a-ae1e-8a8967b900f7]
2026-01-02 00:02:45.019583 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-01-02 00:02:45.234152 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=9c9e7c90-775c-44f2-a251-d68da96a9d88]
2026-01-02 00:02:45.358990 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=dd51c691-1ac9-4fc0-90c9-9072a2ba2e86]
2026-01-02 00:02:45.495902 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=03db1444-050c-4a47-af1d-6993e17ed987]
2026-01-02 00:02:45.526813 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=0e727db7-2929-4020-8b7f-92e384b25372]
2026-01-02 00:02:45.547736 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=f49ab2b7-2c97-43ce-8f79-bb2de07f6e2f]
2026-01-02 00:02:45.657977 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=e9403b32-5af9-421d-84cc-11361b08ab96]
2026-01-02 00:02:45.834709 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 2s [id=3474328a-707a-44b7-9f43-55fb8149213a]
2026-01-02 00:02:45.849998 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=13c2190b-6651-4b22-a92e-34036629c399]
2026-01-02 00:02:46.066780 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=9124dc2c-9701-420f-9ca2-58cad130a622]
2026-01-02 00:02:46.669994 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=c14d9bef-711a-4af4-a483-b15c8d0a01e3]
2026-01-02 00:02:46.697766 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-01-02 00:02:46.706689 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
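The floating-IP handling that begins here (create the IP, then bind it to the manager's management port) is the usual two-resource pattern with the OpenStack provider. A minimal sketch under stated assumptions: the pool name "public" and the reference to `manager_port_management` are hypothetical, chosen only to match the resource names in the log.

```hcl
# Hypothetical sketch; the pool name is an assumption, resource names follow the log.
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public" # external network / floating IP pool (assumed)
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}
```

Using a separate associate resource lets the floating IP be created in parallel with the instance and bound only once the port exists, which is why the association completes seconds after the IP in the log.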
2026-01-02 00:02:46.707876 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-01-02 00:02:46.711686 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-01-02 00:02:46.716133 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-01-02 00:02:46.724096 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-01-02 00:02:46.733724 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-01-02 00:02:48.267928 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=24cba70e-5bc8-45d4-af99-0b746a89babd]
2026-01-02 00:02:48.275830 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-01-02 00:02:48.281145 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-01-02 00:02:48.281218 | orchestrator | local_file.inventory: Creating...
2026-01-02 00:02:48.284218 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=220f79476b9920c6dfc55ec045135ac975f0378c]
2026-01-02 00:02:48.287981 | orchestrator | local_file.inventory: Creation complete after 0s [id=16dab1c529d49048747e9a9f13d160eee72634c0]
2026-01-02 00:02:49.474751 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=24cba70e-5bc8-45d4-af99-0b746a89babd]
2026-01-02 00:02:56.710954 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-01-02 00:02:56.713255 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-01-02 00:02:56.715668 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-01-02 00:02:56.722239 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-01-02 00:02:56.725536 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-01-02 00:02:56.737056 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-01-02 00:03:06.711761 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-01-02 00:03:06.713856 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-01-02 00:03:06.716289 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-01-02 00:03:06.722578 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-01-02 00:03:06.725808 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-01-02 00:03:06.738259 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-01-02 00:03:07.556184 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=fc2d9969-f69d-4bff-8893-7b901f102fc7]
2026-01-02 00:03:16.719927 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-01-02 00:03:16.720055 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-01-02 00:03:16.723120 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-01-02 00:03:16.726697 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-01-02 00:03:16.739220 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-01-02 00:03:17.367363 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 30s [id=36d19e42-dd6e-42e5-8f4c-c3bbe4b0088e]
2026-01-02 00:03:17.674794 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=5db38a73-c2f9-4ad1-8431-eb77173920a2]
2026-01-02 00:03:18.042242 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=f686642c-9782-4c20-b936-aabffbdc3d4e]
2026-01-02 00:03:26.728867 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-01-02 00:03:26.729089 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-01-02 00:03:27.652582 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=489389a6-7869-4651-99df-72f0470ddffa]
2026-01-02 00:03:29.070123 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 42s [id=fc6ed2e5-b68a-4a57-8219-a441051047bf]
2026-01-02 00:03:29.107708 | orchestrator | null_resource.node_semaphore: Creating...
2026-01-02 00:03:29.117554 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-01-02 00:03:29.117763 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=5966150795559557685]
2026-01-02 00:03:29.118493 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-01-02 00:03:29.118701 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-01-02 00:03:29.119131 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-01-02 00:03:29.119182 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-01-02 00:03:29.122436 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
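The nine `node_volume_attachment` resources that start here fan the extra Cinder volumes out across the six node servers once all instances exist (the `node_semaphore` null resource acts as that barrier). A hypothetical sketch of the attachment pattern; the `count` arithmetic and cross-references are assumptions for illustration and may differ from the actual testbed configuration:

```hcl
# Hypothetical sketch: attach each extra data volume to one of the node
# servers. The real index-to-server mapping in the testbed may differ.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id

  # Implicit dependency on the instances ensures attachments wait for
  # the servers, matching the ordering visible in the log.
}
```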
2026-01-02 00:03:29.148233 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-01-02 00:03:29.157552 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-01-02 00:03:29.158804 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-01-02 00:03:29.184627 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-01-02 00:03:32.645501 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=489389a6-7869-4651-99df-72f0470ddffa/496b1234-da7e-4975-8125-a1f8cbe1a452]
2026-01-02 00:03:32.680490 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=5db38a73-c2f9-4ad1-8431-eb77173920a2/3a47a132-03ad-4adf-a37b-d405efe1a07c]
2026-01-02 00:03:32.683986 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=36d19e42-dd6e-42e5-8f4c-c3bbe4b0088e/91cfe094-4682-4bfc-95e3-88354566cb8a]
2026-01-02 00:03:32.720866 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=489389a6-7869-4651-99df-72f0470ddffa/7a849538-9b89-4e07-840a-8a2ecc10a58d]
2026-01-02 00:03:32.729084 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=5db38a73-c2f9-4ad1-8431-eb77173920a2/26cdd52f-83be-4086-bce2-9cb6df4f24ab]
2026-01-02 00:03:32.753151 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=36d19e42-dd6e-42e5-8f4c-c3bbe4b0088e/ace49a83-40fe-462c-82a5-a32ee72a9346]
2026-01-02 00:03:38.839257 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=5db38a73-c2f9-4ad1-8431-eb77173920a2/3f193762-36b0-4c27-b28e-8efb206edc66]
2026-01-02 00:03:38.839620 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=36d19e42-dd6e-42e5-8f4c-c3bbe4b0088e/6d9d2903-81fe-42d1-9111-d7d9a87231b0]
2026-01-02 00:03:38.869249 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=489389a6-7869-4651-99df-72f0470ddffa/84499345-a879-443a-82ee-40e5571fa8cd]
2026-01-02 00:03:39.187991 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-01-02 00:03:49.189139 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-01-02 00:03:49.576758 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=8a04d935-659d-4a2a-b044-282eb4573c5d]
2026-01-02 00:03:49.600957 | orchestrator |
2026-01-02 00:03:49.601052 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-01-02 00:03:49.601075 | orchestrator |
2026-01-02 00:03:49.601087 | orchestrator | Outputs:
2026-01-02 00:03:49.601097 | orchestrator |
2026-01-02 00:03:49.601106 | orchestrator | manager_address =
2026-01-02 00:03:49.601116 | orchestrator | private_key =
2026-01-02 00:03:49.960518 | orchestrator | ok: Runtime: 0:01:21.728930
2026-01-02 00:03:49.999561 |
2026-01-02 00:03:49.999743 | TASK [Create infrastructure (stable)]
2026-01-02 00:03:50.562237 | orchestrator | skipping: Conditional result was False
2026-01-02 00:03:50.584695 |
2026-01-02 00:03:50.585018 | TASK [Fetch manager address]
2026-01-02 00:03:51.073664 | orchestrator | ok
2026-01-02 00:03:51.086589 |
2026-01-02 00:03:51.086813 | TASK [Set manager_host address]
2026-01-02 00:03:51.161473 | orchestrator | ok
2026-01-02 00:03:51.172030 |
2026-01-02 00:03:51.172222 | LOOP [Update ansible collections]
2026-01-02 00:03:52.253215 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-02 00:03:52.253502 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-02 00:03:52.253543 | orchestrator | Starting galaxy collection install process
2026-01-02 00:03:52.253569 | orchestrator | Process install dependency map
2026-01-02 00:03:52.253592 | orchestrator | Starting collection install process
2026-01-02 00:03:52.253613 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons'
2026-01-02 00:03:52.253637 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons
2026-01-02 00:03:52.253668 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-01-02 00:03:52.253723 | orchestrator | ok: Item: commons Runtime: 0:00:00.726931
2026-01-02 00:03:53.334295 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-02 00:03:53.334433 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-02 00:03:53.334481 | orchestrator | Starting galaxy collection install process
2026-01-02 00:03:53.334519 | orchestrator | Process install dependency map
2026-01-02 00:03:53.334553 | orchestrator | Starting collection install process
2026-01-02 00:03:53.334587 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services'
2026-01-02 00:03:53.334621 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services
2026-01-02 00:03:53.334654 | orchestrator | osism.services:999.0.0 was installed successfully
2026-01-02 00:03:53.334703 | orchestrator | ok: Item: services Runtime: 0:00:00.816334
2026-01-02 00:03:53.351680 |
2026-01-02 00:03:53.351793 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-01-02 00:04:03.897779 | orchestrator | ok
2026-01-02 00:04:03.916576 |
2026-01-02 00:04:03.916764 | TASK [Wait a little longer for the manager so that everything is ready]
2026-01-02 00:05:03.971724 | orchestrator | ok
2026-01-02 00:05:03.984838 |
2026-01-02 00:05:03.984999 | TASK [Fetch manager ssh hostkey]
2026-01-02 00:05:05.570167 | orchestrator | Output suppressed because no_log was given
2026-01-02 00:05:05.587887 |
2026-01-02 00:05:05.588063 | TASK [Get ssh keypair from terraform environment]
2026-01-02 00:05:06.128843 | orchestrator | ok: Runtime: 0:00:00.006253
2026-01-02 00:05:06.139677 |
2026-01-02 00:05:06.139820 | TASK [Point out that the following task takes some time and does not give any output]
2026-01-02 00:05:06.173490 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-01-02 00:05:06.184914 |
2026-01-02 00:05:06.185069 | TASK [Run manager part 0]
2026-01-02 00:05:07.140730 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-02 00:05:07.191038 | orchestrator |
2026-01-02 00:05:07.191097 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-01-02 00:05:07.191105 | orchestrator |
2026-01-02 00:05:07.191120 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-01-02 00:05:09.163845 | orchestrator | ok: [testbed-manager]
2026-01-02 00:05:09.163925 | orchestrator |
2026-01-02 00:05:09.163958 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-01-02 00:05:09.163974 | orchestrator |
2026-01-02 00:05:09.163989 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-02 00:05:11.086150 | orchestrator | ok: [testbed-manager]
2026-01-02 00:05:11.086222 | orchestrator |
2026-01-02 00:05:11.086236 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-01-02 00:05:11.768518 | orchestrator | ok: [testbed-manager]
2026-01-02 00:05:11.768577 | orchestrator |
2026-01-02 00:05:11.768586 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-01-02 00:05:11.815365 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:05:11.815436 | orchestrator |
2026-01-02 00:05:11.815448 | orchestrator | TASK [Update package cache] ****************************************************
2026-01-02 00:05:11.843907 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:05:11.843966 | orchestrator |
2026-01-02 00:05:11.843974 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-02 00:05:11.877454 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:05:11.877550 | orchestrator |
2026-01-02 00:05:11.877567 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-02 00:05:11.917221 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:05:11.917361 | orchestrator |
2026-01-02 00:05:11.917385 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-01-02 00:05:11.956998 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:05:11.957072 | orchestrator |
2026-01-02 00:05:11.957084 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-01-02 00:05:11.997515 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:05:11.997589 | orchestrator |
2026-01-02 00:05:11.997601 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-01-02 00:05:12.033350 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:05:12.033411 | orchestrator |
2026-01-02 00:05:12.033420 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-01-02
00:05:12.746182 | orchestrator | changed: [testbed-manager] 2026-01-02 00:05:12.746239 | orchestrator | 2026-01-02 00:05:12.746247 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-01-02 00:08:03.422871 | orchestrator | changed: [testbed-manager] 2026-01-02 00:08:03.422943 | orchestrator | 2026-01-02 00:08:03.422959 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-02 00:09:37.640409 | orchestrator | changed: [testbed-manager] 2026-01-02 00:09:37.640510 | orchestrator | 2026-01-02 00:09:37.640529 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-02 00:10:01.147112 | orchestrator | changed: [testbed-manager] 2026-01-02 00:10:01.147201 | orchestrator | 2026-01-02 00:10:01.147212 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-01-02 00:10:11.993904 | orchestrator | changed: [testbed-manager] 2026-01-02 00:10:11.993987 | orchestrator | 2026-01-02 00:10:11.994010 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-02 00:10:12.042521 | orchestrator | ok: [testbed-manager] 2026-01-02 00:10:12.042716 | orchestrator | 2026-01-02 00:10:12.042734 | orchestrator | TASK [Get current user] ******************************************************** 2026-01-02 00:10:12.850639 | orchestrator | ok: [testbed-manager] 2026-01-02 00:10:12.850699 | orchestrator | 2026-01-02 00:10:12.850717 | orchestrator | TASK [Create venv directory] *************************************************** 2026-01-02 00:10:13.656246 | orchestrator | changed: [testbed-manager] 2026-01-02 00:10:13.656320 | orchestrator | 2026-01-02 00:10:13.656334 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-01-02 00:10:20.162361 | orchestrator | changed: [testbed-manager] 2026-01-02 00:10:20.162401 | 
orchestrator | 2026-01-02 00:10:20.162423 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-01-02 00:10:26.679093 | orchestrator | changed: [testbed-manager] 2026-01-02 00:10:26.679217 | orchestrator | 2026-01-02 00:10:26.679238 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-01-02 00:10:29.476890 | orchestrator | changed: [testbed-manager] 2026-01-02 00:10:29.476963 | orchestrator | 2026-01-02 00:10:29.476975 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-01-02 00:10:31.351864 | orchestrator | changed: [testbed-manager] 2026-01-02 00:10:31.352780 | orchestrator | 2026-01-02 00:10:31.352802 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-01-02 00:10:32.491203 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-02 00:10:32.491270 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-02 00:10:32.491284 | orchestrator | 2026-01-02 00:10:32.491297 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-02 00:10:32.536945 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-02 00:10:32.537021 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-02 00:10:32.537033 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-02 00:10:32.537043 | orchestrator | deprecation_warnings=False in ansible.cfg. 
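The tasks above bootstrap a Python virtual environment at /opt/venv and pin netaddr, ansible-core, requests>=2.32.2 and docker>=7.1.0 into it. A minimal sketch of that bootstrap, with a temporary directory standing in for /opt/venv (an assumption, so the sketch has no system-wide side effects):

```shell
#!/usr/bin/env bash
# Sketch of the venv bootstrap traced above; the temp directory is a
# stand-in for /opt/venv so the example does not touch system paths.
set -e
venv_dir="$(mktemp -d)/venv"
python3 -m venv "$venv_dir"
# The job then installs its pinned Python deps into that venv, e.g.:
#   "$venv_dir/bin/pip" install netaddr ansible-core 'requests>=2.32.2' 'docker>=7.1.0'
# Invoking the venv's own interpreter guarantees those pins are picked up:
"$venv_dir/bin/python" -c 'import sys; print(sys.prefix)'
```

Running tools via the venv's own bin/ directory (or activating it, as the log does later with `source /opt/venv/bin/activate`) is what keeps the deployment on exactly these pinned versions.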
2026-01-02 00:10:39.160024 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-02 00:10:39.160123 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-02 00:10:39.160137 | orchestrator | 2026-01-02 00:10:39.160148 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-02 00:10:39.766324 | orchestrator | changed: [testbed-manager] 2026-01-02 00:10:39.766378 | orchestrator | 2026-01-02 00:10:39.766388 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-02 00:12:00.294597 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-02 00:12:00.294715 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-02 00:12:00.294734 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-02 00:12:00.294747 | orchestrator | 2026-01-02 00:12:00.294760 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-02 00:12:02.710185 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-01-02 00:12:02.710257 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-02 00:12:02.710264 | orchestrator | 2026-01-02 00:12:02.710270 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-02 00:12:02.710277 | orchestrator | 2026-01-02 00:12:02.710282 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-02 00:12:04.164528 | orchestrator | ok: [testbed-manager] 2026-01-02 00:12:04.164595 | orchestrator | 2026-01-02 00:12:04.164610 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-02 00:12:04.213312 | orchestrator | ok: [testbed-manager] 2026-01-02 00:12:04.213358 | 
orchestrator | 2026-01-02 00:12:04.213366 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-02 00:12:04.288924 | orchestrator | ok: [testbed-manager] 2026-01-02 00:12:04.288967 | orchestrator | 2026-01-02 00:12:04.288975 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-02 00:12:05.102338 | orchestrator | changed: [testbed-manager] 2026-01-02 00:12:05.102383 | orchestrator | 2026-01-02 00:12:05.102393 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-02 00:12:05.882923 | orchestrator | changed: [testbed-manager] 2026-01-02 00:12:05.883029 | orchestrator | 2026-01-02 00:12:05.883048 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-02 00:12:07.309675 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-02 00:12:07.309768 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-02 00:12:07.309784 | orchestrator | 2026-01-02 00:12:07.309822 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-01-02 00:12:08.724315 | orchestrator | changed: [testbed-manager] 2026-01-02 00:12:08.724376 | orchestrator | 2026-01-02 00:12:08.724384 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-02 00:12:10.561624 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-02 00:12:10.561672 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-02 00:12:10.561681 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-02 00:12:10.561689 | orchestrator | 2026-01-02 00:12:10.561697 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-02 00:12:10.629015 | orchestrator | skipping: 
[testbed-manager] 2026-01-02 00:12:10.629145 | orchestrator | 2026-01-02 00:12:10.629170 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-02 00:12:10.709247 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:12:10.709293 | orchestrator | 2026-01-02 00:12:10.709304 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-02 00:12:11.279083 | orchestrator | changed: [testbed-manager] 2026-01-02 00:12:11.279171 | orchestrator | 2026-01-02 00:12:11.279188 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-02 00:12:11.343565 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:12:11.343650 | orchestrator | 2026-01-02 00:12:11.343667 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-02 00:12:12.210792 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-02 00:12:12.210866 | orchestrator | changed: [testbed-manager] 2026-01-02 00:12:12.210879 | orchestrator | 2026-01-02 00:12:12.210889 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-02 00:12:12.250780 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:12:12.250860 | orchestrator | 2026-01-02 00:12:12.250878 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-02 00:12:12.281953 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:12:12.282123 | orchestrator | 2026-01-02 00:12:12.282145 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-02 00:12:12.311312 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:12:12.311389 | orchestrator | 2026-01-02 00:12:12.311406 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-02 00:12:12.385516 | 
orchestrator | skipping: [testbed-manager] 2026-01-02 00:12:12.385602 | orchestrator | 2026-01-02 00:12:12.385617 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-02 00:12:13.154772 | orchestrator | ok: [testbed-manager] 2026-01-02 00:12:13.154864 | orchestrator | 2026-01-02 00:12:13.154882 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-02 00:12:13.154895 | orchestrator | 2026-01-02 00:12:13.154906 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-02 00:12:14.552851 | orchestrator | ok: [testbed-manager] 2026-01-02 00:12:14.552949 | orchestrator | 2026-01-02 00:12:14.552966 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-02 00:12:15.559731 | orchestrator | changed: [testbed-manager] 2026-01-02 00:12:15.560093 | orchestrator | 2026-01-02 00:12:15.560139 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:12:15.560366 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-02 00:12:15.560391 | orchestrator | 2026-01-02 00:12:15.745733 | orchestrator | ok: Runtime: 0:07:09.177263 2026-01-02 00:12:15.762241 | 2026-01-02 00:12:15.762402 | TASK [Point out that logging in to the manager is now possible] 2026-01-02 00:12:15.801680 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-01-02 00:12:15.813689 | 2026-01-02 00:12:15.813846 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-02 00:12:15.862457 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output from it here. It takes a few minutes for this task to complete. 
2026-01-02 00:12:15.875246 | 2026-01-02 00:12:15.875397 | TASK [Run manager part 1 + 2] 2026-01-02 00:12:17.020093 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-02 00:12:17.080350 | orchestrator | 2026-01-02 00:12:17.080401 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-02 00:12:17.080408 | orchestrator | 2026-01-02 00:12:17.080422 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-02 00:12:20.193230 | orchestrator | ok: [testbed-manager] 2026-01-02 00:12:20.193333 | orchestrator | 2026-01-02 00:12:20.193392 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-02 00:12:20.231919 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:12:20.231984 | orchestrator | 2026-01-02 00:12:20.231997 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-02 00:12:20.270477 | orchestrator | ok: [testbed-manager] 2026-01-02 00:12:20.270528 | orchestrator | 2026-01-02 00:12:20.270536 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-02 00:12:20.325960 | orchestrator | ok: [testbed-manager] 2026-01-02 00:12:20.326106 | orchestrator | 2026-01-02 00:12:20.326127 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-02 00:12:20.398909 | orchestrator | ok: [testbed-manager] 2026-01-02 00:12:20.399006 | orchestrator | 2026-01-02 00:12:20.399025 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-02 00:12:20.455759 | orchestrator | ok: [testbed-manager] 2026-01-02 00:12:20.455836 | orchestrator | 2026-01-02 00:12:20.455849 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-02 00:12:20.511001 | 
orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-02 00:12:20.511092 | orchestrator | 2026-01-02 00:12:20.511102 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-02 00:12:21.263308 | orchestrator | ok: [testbed-manager] 2026-01-02 00:12:21.264501 | orchestrator | 2026-01-02 00:12:21.264528 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-02 00:12:21.316717 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:12:21.316782 | orchestrator | 2026-01-02 00:12:21.316790 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-02 00:12:22.738323 | orchestrator | changed: [testbed-manager] 2026-01-02 00:12:22.738431 | orchestrator | 2026-01-02 00:12:22.738451 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-02 00:12:23.367111 | orchestrator | ok: [testbed-manager] 2026-01-02 00:12:23.367177 | orchestrator | 2026-01-02 00:12:23.367185 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-02 00:12:24.534770 | orchestrator | changed: [testbed-manager] 2026-01-02 00:12:24.534867 | orchestrator | 2026-01-02 00:12:24.534885 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-02 00:12:39.856509 | orchestrator | changed: [testbed-manager] 2026-01-02 00:12:39.856613 | orchestrator | 2026-01-02 00:12:39.856629 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-02 00:12:40.627323 | orchestrator | ok: [testbed-manager] 2026-01-02 00:12:40.627388 | orchestrator | 2026-01-02 00:12:40.627406 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-01-02 00:12:40.677355 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:12:40.677391 | orchestrator | 2026-01-02 00:12:40.677398 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-02 00:12:41.674537 | orchestrator | changed: [testbed-manager] 2026-01-02 00:12:41.674590 | orchestrator | 2026-01-02 00:12:41.674601 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-02 00:12:42.658082 | orchestrator | changed: [testbed-manager] 2026-01-02 00:12:42.658164 | orchestrator | 2026-01-02 00:12:42.658177 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-02 00:12:43.232748 | orchestrator | changed: [testbed-manager] 2026-01-02 00:12:43.232792 | orchestrator | 2026-01-02 00:12:43.232798 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-02 00:12:43.277239 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-02 00:12:43.277365 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-02 00:12:43.277383 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-02 00:12:43.277396 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-02 00:12:46.191403 | orchestrator | changed: [testbed-manager] 2026-01-02 00:12:46.191475 | orchestrator | 2026-01-02 00:12:46.191487 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-02 00:12:55.567945 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-02 00:12:55.568125 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-02 00:12:55.568151 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-02 00:12:55.568166 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-02 00:12:55.568189 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-02 00:12:55.568205 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-02 00:12:55.568219 | orchestrator | 2026-01-02 00:12:55.568235 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-02 00:12:56.625215 | orchestrator | changed: [testbed-manager] 2026-01-02 00:12:56.625270 | orchestrator | 2026-01-02 00:12:56.625279 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-02 00:12:56.669345 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:12:56.669422 | orchestrator | 2026-01-02 00:12:56.669435 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-02 00:12:59.858821 | orchestrator | changed: [testbed-manager] 2026-01-02 00:12:59.859129 | orchestrator | 2026-01-02 00:12:59.859153 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-02 00:12:59.901833 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:12:59.901890 | orchestrator | 2026-01-02 00:12:59.901900 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-02 00:14:41.924320 | orchestrator | changed: [testbed-manager] 2026-01-02 
00:14:41.924437 | orchestrator | 2026-01-02 00:14:41.924454 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-02 00:14:43.147921 | orchestrator | ok: [testbed-manager] 2026-01-02 00:14:43.148052 | orchestrator | 2026-01-02 00:14:43.148080 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:14:43.148102 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-02 00:14:43.148122 | orchestrator | 2026-01-02 00:14:43.518741 | orchestrator | ok: Runtime: 0:02:27.033189 2026-01-02 00:14:43.535610 | 2026-01-02 00:14:43.535765 | TASK [Reboot manager] 2026-01-02 00:14:45.072839 | orchestrator | ok: Runtime: 0:00:01.044611 2026-01-02 00:14:45.089601 | 2026-01-02 00:14:45.089768 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-02 00:15:01.628772 | orchestrator | ok 2026-01-02 00:15:01.641392 | 2026-01-02 00:15:01.641574 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-02 00:16:01.688172 | orchestrator | ok 2026-01-02 00:16:01.698908 | 2026-01-02 00:16:01.699063 | TASK [Deploy manager + bootstrap nodes] 2026-01-02 00:16:04.395590 | orchestrator | 2026-01-02 00:16:04.395699 | orchestrator | # DEPLOY MANAGER 2026-01-02 00:16:04.395710 | orchestrator | 2026-01-02 00:16:04.395732 | orchestrator | + set -e 2026-01-02 00:16:04.395738 | orchestrator | + echo 2026-01-02 00:16:04.395744 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-02 00:16:04.395750 | orchestrator | + echo 2026-01-02 00:16:04.395770 | orchestrator | + cat /opt/manager-vars.sh 2026-01-02 00:16:04.399436 | orchestrator | export NUMBER_OF_NODES=6 2026-01-02 00:16:04.399447 | orchestrator | 2026-01-02 00:16:04.399451 | orchestrator | export CEPH_VERSION=reef 2026-01-02 00:16:04.399457 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-02 00:16:04.399462 | orchestrator 
| export MANAGER_VERSION=latest 2026-01-02 00:16:04.399471 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-01-02 00:16:04.399475 | orchestrator | 2026-01-02 00:16:04.399482 | orchestrator | export ARA=false 2026-01-02 00:16:04.399487 | orchestrator | export DEPLOY_MODE=manager 2026-01-02 00:16:04.399493 | orchestrator | export TEMPEST=true 2026-01-02 00:16:04.399498 | orchestrator | export IS_ZUUL=true 2026-01-02 00:16:04.399501 | orchestrator | 2026-01-02 00:16:04.399509 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.55 2026-01-02 00:16:04.399513 | orchestrator | export EXTERNAL_API=false 2026-01-02 00:16:04.399517 | orchestrator | 2026-01-02 00:16:04.399521 | orchestrator | export IMAGE_USER=ubuntu 2026-01-02 00:16:04.399527 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-02 00:16:04.399530 | orchestrator | 2026-01-02 00:16:04.399534 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-02 00:16:04.399621 | orchestrator | 2026-01-02 00:16:04.399635 | orchestrator | + echo 2026-01-02 00:16:04.399643 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-02 00:16:04.400856 | orchestrator | ++ export INTERACTIVE=false 2026-01-02 00:16:04.400864 | orchestrator | ++ INTERACTIVE=false 2026-01-02 00:16:04.400870 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-02 00:16:04.400876 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-02 00:16:04.401058 | orchestrator | + source /opt/manager-vars.sh 2026-01-02 00:16:04.401097 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-02 00:16:04.401103 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-02 00:16:04.401107 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-02 00:16:04.401112 | orchestrator | ++ CEPH_VERSION=reef 2026-01-02 00:16:04.401132 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-02 00:16:04.401137 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-02 00:16:04.401141 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-02 00:16:04.401146 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-01-02 00:16:04.401175 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-02 00:16:04.401184 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-02 00:16:04.401188 | orchestrator | ++ export ARA=false 2026-01-02 00:16:04.401193 | orchestrator | ++ ARA=false 2026-01-02 00:16:04.401197 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-02 00:16:04.401201 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-02 00:16:04.401205 | orchestrator | ++ export TEMPEST=true 2026-01-02 00:16:04.401209 | orchestrator | ++ TEMPEST=true 2026-01-02 00:16:04.401230 | orchestrator | ++ export IS_ZUUL=true 2026-01-02 00:16:04.401235 | orchestrator | ++ IS_ZUUL=true 2026-01-02 00:16:04.401240 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.55 2026-01-02 00:16:04.401244 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.55 2026-01-02 00:16:04.401248 | orchestrator | ++ export EXTERNAL_API=false 2026-01-02 00:16:04.401252 | orchestrator | ++ EXTERNAL_API=false 2026-01-02 00:16:04.401281 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-02 00:16:04.401286 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-02 00:16:04.401291 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-02 00:16:04.401295 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-02 00:16:04.401299 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-02 00:16:04.401303 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-02 00:16:04.401322 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-02 00:16:04.460929 | orchestrator | + docker version 2026-01-02 00:16:04.743620 | orchestrator | Client: Docker Engine - Community 2026-01-02 00:16:04.743703 | orchestrator | Version: 27.5.1 2026-01-02 00:16:04.743715 | orchestrator | API version: 1.47 2026-01-02 00:16:04.743728 | orchestrator | Go version: go1.22.11 2026-01-02 00:16:04.743737 | orchestrator | Git commit: 9f9e405 2026-01-02 00:16:04.743746 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-02 00:16:04.743756 | orchestrator | OS/Arch: linux/amd64 2026-01-02 00:16:04.743764 | orchestrator | Context: default 2026-01-02 00:16:04.743773 | orchestrator | 2026-01-02 00:16:04.743783 | orchestrator | Server: Docker Engine - Community 2026-01-02 00:16:04.743792 | orchestrator | Engine: 2026-01-02 00:16:04.743875 | orchestrator | Version: 27.5.1 2026-01-02 00:16:04.743895 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-02 00:16:04.743931 | orchestrator | Go version: go1.22.11 2026-01-02 00:16:04.743940 | orchestrator | Git commit: 4c9b3b0 2026-01-02 00:16:04.743949 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-02 00:16:04.743958 | orchestrator | OS/Arch: linux/amd64 2026-01-02 00:16:04.743967 | orchestrator | Experimental: false 2026-01-02 00:16:04.743975 | orchestrator | containerd: 2026-01-02 00:16:04.744012 | orchestrator | Version: v2.2.1 2026-01-02 00:16:04.744022 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-02 00:16:04.744031 | orchestrator | runc: 2026-01-02 00:16:04.744040 | orchestrator | Version: 1.3.4 2026-01-02 00:16:04.744049 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-02 00:16:04.744058 | orchestrator | docker-init: 2026-01-02 00:16:04.744066 | orchestrator | Version: 0.19.0 2026-01-02 00:16:04.744075 | orchestrator | GitCommit: de40ad0 2026-01-02 00:16:04.747530 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-02 00:16:04.755574 | orchestrator | + set -e 2026-01-02 00:16:04.755618 | orchestrator | + source /opt/manager-vars.sh 2026-01-02 00:16:04.755631 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-02 00:16:04.755645 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-02 00:16:04.755781 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-02 00:16:04.755796 | orchestrator | ++ CEPH_VERSION=reef 2026-01-02 00:16:04.755807 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-02 
00:16:04.755819 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-02 00:16:04.755831 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-02 00:16:04.755842 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-02 00:16:04.755853 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-02 00:16:04.755864 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-02 00:16:04.755875 | orchestrator | ++ export ARA=false 2026-01-02 00:16:04.755886 | orchestrator | ++ ARA=false 2026-01-02 00:16:04.755896 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-02 00:16:04.755908 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-02 00:16:04.755919 | orchestrator | ++ export TEMPEST=true 2026-01-02 00:16:04.755930 | orchestrator | ++ TEMPEST=true 2026-01-02 00:16:04.755941 | orchestrator | ++ export IS_ZUUL=true 2026-01-02 00:16:04.755951 | orchestrator | ++ IS_ZUUL=true 2026-01-02 00:16:04.755963 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.55 2026-01-02 00:16:04.755974 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.55 2026-01-02 00:16:04.756006 | orchestrator | ++ export EXTERNAL_API=false 2026-01-02 00:16:04.756018 | orchestrator | ++ EXTERNAL_API=false 2026-01-02 00:16:04.756029 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-02 00:16:04.756040 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-02 00:16:04.756051 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-02 00:16:04.756061 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-02 00:16:04.756073 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-02 00:16:04.756083 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-02 00:16:04.756094 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-02 00:16:04.756105 | orchestrator | ++ export INTERACTIVE=false 2026-01-02 00:16:04.756116 | orchestrator | ++ INTERACTIVE=false 2026-01-02 00:16:04.756127 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-02 00:16:04.756142 | orchestrator | ++ OSISM_APPLY_RETRY=1 
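The trace above shows 000-manager.sh sourcing /opt/manager-vars.sh and include.sh under `set -e`; each variable appears twice because xtrace prints one `++` line for the `export` statement and one for the resulting assignment. The pattern in isolation, with a temp file standing in for /opt/manager-vars.sh:

```shell
#!/usr/bin/env bash
# Sketch of the vars-file pattern traced above; a temp file stands in
# for /opt/manager-vars.sh.
set -e
vars=$(mktemp)
cat > "$vars" <<'EOF'
export NUMBER_OF_NODES=6
export CEPH_VERSION=reef
export OPENSTACK_VERSION=2024.2
EOF
# Sourcing (not executing) the file makes the exports land in this
# shell, so later deploy steps can read them directly.
source "$vars"
echo "deploying ${NUMBER_OF_NODES} nodes with Ceph ${CEPH_VERSION}"
```

Because the deploy scripts re-source the same file, every stage sees one consistent set of versions and flags without passing arguments around.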
2026-01-02 00:16:04.756209 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-01-02 00:16:04.756223 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-02 00:16:04.756234 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2026-01-02 00:16:04.765020 | orchestrator | + set -e
2026-01-02 00:16:04.765055 | orchestrator | + VERSION=reef
2026-01-02 00:16:04.765352 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2026-01-02 00:16:04.771640 | orchestrator | + [[ -n ceph_version: reef ]]
2026-01-02 00:16:04.771682 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2026-01-02 00:16:04.777688 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2026-01-02 00:16:04.783480 | orchestrator | + set -e
2026-01-02 00:16:04.783528 | orchestrator | + VERSION=2024.2
2026-01-02 00:16:04.784522 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2026-01-02 00:16:04.788730 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2026-01-02 00:16:04.788781 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2026-01-02 00:16:04.794959 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-01-02 00:16:04.795547 | orchestrator | ++ semver latest 7.0.0
2026-01-02 00:16:04.866533 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-02 00:16:04.866620 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-02 00:16:04.866634 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-01-02 00:16:04.867286 | orchestrator | ++ semver latest 10.0.0-0
2026-01-02 00:16:04.933674 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-02 00:16:04.934145 | orchestrator | ++ semver 2024.2 2025.1
2026-01-02 00:16:04.996899 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-02 00:16:04.997027 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-01-02 00:16:05.102658 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-02 00:16:05.104282 | orchestrator | + source /opt/venv/bin/activate
2026-01-02 00:16:05.105677 | orchestrator | ++ deactivate nondestructive
2026-01-02 00:16:05.105700 | orchestrator | ++ '[' -n '' ']'
2026-01-02 00:16:05.105712 | orchestrator | ++ '[' -n '' ']'
2026-01-02 00:16:05.105723 | orchestrator | ++ hash -r
2026-01-02 00:16:05.105827 | orchestrator | ++ '[' -n '' ']'
2026-01-02 00:16:05.105854 | orchestrator | ++ unset VIRTUAL_ENV
2026-01-02 00:16:05.105872 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-01-02 00:16:05.105892 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-01-02 00:16:05.105969 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-01-02 00:16:05.106045 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-01-02 00:16:05.106173 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-01-02 00:16:05.106202 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-01-02 00:16:05.106222 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-02 00:16:05.106318 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-02 00:16:05.106342 | orchestrator | ++ export PATH
2026-01-02 00:16:05.106367 | orchestrator | ++ '[' -n '' ']'
2026-01-02 00:16:05.106550 | orchestrator | ++ '[' -z '' ']'
2026-01-02 00:16:05.106577 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-01-02 00:16:05.106594 | orchestrator | ++ PS1='(venv) '
2026-01-02 00:16:05.106605 | orchestrator | ++ export PS1
2026-01-02 00:16:05.106616 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-01-02 00:16:05.106628 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-01-02 00:16:05.106639 | orchestrator | ++ hash -r
2026-01-02 00:16:05.106744 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-01-02 00:16:06.515504 | orchestrator |
2026-01-02 00:16:06.515612 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-01-02 00:16:06.515628 | orchestrator |
2026-01-02 00:16:06.515641 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-02 00:16:07.114282 | orchestrator | ok: [testbed-manager]
2026-01-02 00:16:07.114412 | orchestrator |
2026-01-02 00:16:07.114438 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-02 00:16:08.139343 | orchestrator | changed: [testbed-manager]
2026-01-02 00:16:08.139466 | orchestrator |
2026-01-02 00:16:08.139492 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-01-02 00:16:08.139512 | orchestrator |
2026-01-02 00:16:08.139531 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-02 00:16:10.629483 | orchestrator | ok: [testbed-manager]
2026-01-02 00:16:10.629600 | orchestrator |
2026-01-02 00:16:10.629626 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-01-02 00:16:10.684745 | orchestrator | ok: [testbed-manager]
2026-01-02 00:16:10.684828 | orchestrator |
2026-01-02 00:16:10.684842 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-01-02 00:16:11.173791 | orchestrator | changed: [testbed-manager]
2026-01-02 00:16:11.173889 | orchestrator |
2026-01-02 00:16:11.173905 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-01-02 00:16:11.216716 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:16:11.216799 | orchestrator |
2026-01-02 00:16:11.216813 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-01-02 00:16:11.600081 | orchestrator | changed: [testbed-manager]
2026-01-02 00:16:11.600176 | orchestrator |
2026-01-02 00:16:11.600194 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2026-01-02 00:16:11.660195 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:16:11.660294 | orchestrator |
2026-01-02 00:16:11.660315 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-01-02 00:16:12.021790 | orchestrator | ok: [testbed-manager]
2026-01-02 00:16:12.021875 | orchestrator |
2026-01-02 00:16:12.021906 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-01-02 00:16:12.158221 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:16:12.158316 | orchestrator |
2026-01-02 00:16:12.158332 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-01-02 00:16:12.158346 | orchestrator |
2026-01-02 00:16:12.158357 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-02 00:16:13.946221 | orchestrator | ok: [testbed-manager]
2026-01-02 00:16:13.946324 | orchestrator |
2026-01-02 00:16:13.946341 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-01-02 00:16:14.041893 | orchestrator | included: osism.services.traefik for testbed-manager
2026-01-02 00:16:14.042073 | orchestrator |
2026-01-02 00:16:14.042093 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-01-02 00:16:14.098613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-01-02 00:16:14.098677 | orchestrator |
2026-01-02 00:16:14.098690 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-01-02 00:16:15.282208 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-01-02 00:16:15.282313 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-01-02 00:16:15.282330 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-01-02 00:16:15.282343 | orchestrator |
2026-01-02 00:16:15.282356 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-01-02 00:16:17.237742 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-01-02 00:16:17.237849 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-01-02 00:16:17.237868 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-01-02 00:16:17.237881 | orchestrator |
2026-01-02 00:16:17.237893 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-01-02 00:16:17.910463 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-02 00:16:17.910584 | orchestrator | changed: [testbed-manager]
2026-01-02 00:16:17.910610 | orchestrator |
2026-01-02 00:16:17.910631 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-01-02 00:16:18.594319 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-02 00:16:18.594413 | orchestrator | changed: [testbed-manager]
2026-01-02 00:16:18.594431 | orchestrator |
2026-01-02 00:16:18.594443 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-01-02 00:16:18.651117 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:16:18.651215 | orchestrator |
2026-01-02 00:16:18.651232 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-01-02 00:16:19.021442 | orchestrator | ok: [testbed-manager]
2026-01-02 00:16:19.021557 | orchestrator |
2026-01-02 00:16:19.021575 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-01-02 00:16:19.113159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-01-02 00:16:19.113266 | orchestrator |
2026-01-02 00:16:19.113288 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-01-02 00:16:20.258820 | orchestrator | changed: [testbed-manager]
2026-01-02 00:16:20.258939 | orchestrator |
2026-01-02 00:16:20.258971 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-01-02 00:16:21.168580 | orchestrator | changed: [testbed-manager]
2026-01-02 00:16:21.168684 | orchestrator |
2026-01-02 00:16:21.168709 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-01-02 00:16:40.115975 | orchestrator | changed: [testbed-manager]
2026-01-02 00:16:40.116126 | orchestrator |
2026-01-02 00:16:40.116143 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-01-02 00:16:40.178590 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:16:40.178668 | orchestrator |
2026-01-02 00:16:40.178676 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-01-02 00:16:40.178681 | orchestrator |
2026-01-02 00:16:40.178708 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-02 00:16:42.039082 | orchestrator | ok: [testbed-manager]
2026-01-02 00:16:42.039187 | orchestrator |
2026-01-02 00:16:42.039204 | orchestrator | TASK [Apply manager role] ******************************************************
2026-01-02 00:16:42.164668 | orchestrator | included: osism.services.manager for testbed-manager
2026-01-02 00:16:42.164775 | orchestrator |
2026-01-02 00:16:42.164791 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-01-02 00:16:42.236530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-01-02 00:16:42.236642 | orchestrator |
2026-01-02 00:16:42.236660 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-01-02 00:16:44.959064 | orchestrator | ok: [testbed-manager]
2026-01-02 00:16:44.959177 | orchestrator |
2026-01-02 00:16:44.959192 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-01-02 00:16:45.016871 | orchestrator | ok: [testbed-manager]
2026-01-02 00:16:45.017023 | orchestrator |
2026-01-02 00:16:45.017048 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-01-02 00:16:45.161739 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-01-02 00:16:45.161847 | orchestrator |
2026-01-02 00:16:45.161862 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-01-02 00:16:48.108828 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-01-02 00:16:48.108954 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-01-02 00:16:48.108969 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-01-02 00:16:48.109017 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-01-02 00:16:48.109029 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-01-02 00:16:48.109039 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-01-02 00:16:48.109049 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-01-02 00:16:48.109059 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-01-02 00:16:48.109069 | orchestrator |
2026-01-02 00:16:48.109080 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-01-02 00:16:48.773540 | orchestrator | changed: [testbed-manager]
2026-01-02 00:16:48.773685 | orchestrator |
2026-01-02 00:16:48.773704 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-01-02 00:16:49.451120 | orchestrator | changed: [testbed-manager]
2026-01-02 00:16:49.451222 | orchestrator |
2026-01-02 00:16:49.451239 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-01-02 00:16:49.528232 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-01-02 00:16:49.528342 | orchestrator |
2026-01-02 00:16:49.528359 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-01-02 00:16:50.830754 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-01-02 00:16:50.830857 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-01-02 00:16:50.830874 | orchestrator |
2026-01-02 00:16:50.830888 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-01-02 00:16:51.467244 | orchestrator | changed: [testbed-manager]
2026-01-02 00:16:51.467346 | orchestrator |
2026-01-02 00:16:51.467364 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-01-02 00:16:51.518259 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:16:51.518327 | orchestrator |
2026-01-02 00:16:51.518340 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-01-02 00:16:51.607820 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-01-02 00:16:51.607904 | orchestrator |
2026-01-02 00:16:51.607916 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-01-02 00:16:52.285311 | orchestrator | changed: [testbed-manager]
2026-01-02 00:16:52.285431 | orchestrator |
2026-01-02 00:16:52.285480 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-01-02 00:16:52.363538 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-01-02 00:16:52.363638 | orchestrator |
2026-01-02 00:16:52.363654 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-01-02 00:16:53.796778 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-02 00:16:53.796930 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-02 00:16:53.797712 | orchestrator | changed: [testbed-manager]
2026-01-02 00:16:53.797742 | orchestrator |
2026-01-02 00:16:53.797755 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-01-02 00:16:54.435970 | orchestrator | changed: [testbed-manager]
2026-01-02 00:16:54.436133 | orchestrator |
2026-01-02 00:16:54.436149 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-01-02 00:16:54.498212 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:16:54.498294 | orchestrator |
2026-01-02 00:16:54.498306 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-01-02 00:16:54.598283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-01-02 00:16:54.598392 | orchestrator |
2026-01-02 00:16:54.598437 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-01-02 00:16:55.133359 | orchestrator | changed: [testbed-manager]
2026-01-02 00:16:55.133466 | orchestrator |
2026-01-02 00:16:55.133484 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-01-02 00:16:55.564755 | orchestrator | changed: [testbed-manager]
2026-01-02 00:16:55.564857 | orchestrator |
2026-01-02 00:16:55.564873 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-01-02 00:16:56.923377 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-01-02 00:16:56.923484 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-01-02 00:16:56.923500 | orchestrator |
2026-01-02 00:16:56.923514 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-01-02 00:16:57.600945 | orchestrator | changed: [testbed-manager]
2026-01-02 00:16:57.601094 | orchestrator |
2026-01-02 00:16:57.601111 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-01-02 00:16:58.005827 | orchestrator | ok: [testbed-manager]
2026-01-02 00:16:58.005929 | orchestrator |
2026-01-02 00:16:58.005946 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-01-02 00:16:58.393642 | orchestrator | changed: [testbed-manager]
2026-01-02 00:16:58.393748 | orchestrator |
2026-01-02 00:16:58.393765 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-01-02 00:16:58.444579 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:16:58.444679 | orchestrator |
2026-01-02 00:16:58.444696 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-01-02 00:16:58.522214 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-01-02 00:16:58.522304 | orchestrator |
2026-01-02 00:16:58.522320 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-01-02 00:16:58.569760 | orchestrator | ok: [testbed-manager]
2026-01-02 00:16:58.569821 | orchestrator |
2026-01-02 00:16:58.569834 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-01-02 00:17:00.670603 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-01-02 00:17:00.670721 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-01-02 00:17:00.670740 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-01-02 00:17:00.670753 | orchestrator |
2026-01-02 00:17:00.670766 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-01-02 00:17:01.414510 | orchestrator | changed: [testbed-manager]
2026-01-02 00:17:01.414615 | orchestrator |
2026-01-02 00:17:01.414631 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-01-02 00:17:02.145639 | orchestrator | changed: [testbed-manager]
2026-01-02 00:17:02.145711 | orchestrator |
2026-01-02 00:17:02.145720 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-01-02 00:17:02.857788 | orchestrator | changed: [testbed-manager]
2026-01-02 00:17:02.857890 | orchestrator |
2026-01-02 00:17:02.857905 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-01-02 00:17:02.951622 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-01-02 00:17:02.951782 | orchestrator |
2026-01-02 00:17:02.951802 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-01-02 00:17:03.001387 | orchestrator | ok: [testbed-manager]
2026-01-02 00:17:03.001437 | orchestrator |
2026-01-02 00:17:03.001449 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-01-02 00:17:03.748669 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-01-02 00:17:03.748786 | orchestrator |
2026-01-02 00:17:03.748803 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-01-02 00:17:03.834861 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-01-02 00:17:03.835016 | orchestrator |
2026-01-02 00:17:03.835044 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-01-02 00:17:04.585687 | orchestrator | changed: [testbed-manager]
2026-01-02 00:17:04.585787 | orchestrator |
2026-01-02 00:17:04.585807 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-01-02 00:17:05.240286 | orchestrator | ok: [testbed-manager]
2026-01-02 00:17:05.240387 | orchestrator |
2026-01-02 00:17:05.240403 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-01-02 00:17:05.292869 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:17:05.292965 | orchestrator |
2026-01-02 00:17:05.293010 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-01-02 00:17:05.350355 | orchestrator | ok: [testbed-manager]
2026-01-02 00:17:05.350468 | orchestrator |
2026-01-02 00:17:05.350484 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-01-02 00:17:06.254810 | orchestrator | changed: [testbed-manager]
2026-01-02 00:17:06.254920 | orchestrator |
2026-01-02 00:17:06.254945 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-01-02 00:18:19.271473 | orchestrator | changed: [testbed-manager]
2026-01-02 00:18:19.271595 | orchestrator |
2026-01-02 00:18:19.271614 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-01-02 00:18:20.314718 | orchestrator | ok: [testbed-manager]
2026-01-02 00:18:20.314828 | orchestrator |
2026-01-02 00:18:20.314845 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-01-02 00:18:20.378921 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:18:20.379060 | orchestrator |
2026-01-02 00:18:20.379077 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-01-02 00:18:24.273047 | orchestrator | changed: [testbed-manager]
2026-01-02 00:18:24.273119 | orchestrator |
2026-01-02 00:18:24.273141 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-01-02 00:18:24.329315 | orchestrator | ok: [testbed-manager]
2026-01-02 00:18:24.329406 | orchestrator |
2026-01-02 00:18:24.329420 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-01-02 00:18:24.329431 | orchestrator |
2026-01-02 00:18:24.329441 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-01-02 00:18:24.393552 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:18:24.393651 | orchestrator |
2026-01-02 00:18:24.393665 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-01-02 00:19:24.442009 | orchestrator | Pausing for 60 seconds
2026-01-02 00:19:24.442181 | orchestrator | changed: [testbed-manager]
2026-01-02 00:19:24.442200 | orchestrator |
2026-01-02 00:19:24.442214 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-01-02 00:19:28.015307 | orchestrator | changed: [testbed-manager]
2026-01-02 00:19:28.015415 | orchestrator |
2026-01-02 00:19:28.015433 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-01-02 00:20:30.194230 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-01-02 00:20:30.194395 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-01-02 00:20:30.194413 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-01-02 00:20:30.194425 | orchestrator | changed: [testbed-manager]
2026-01-02 00:20:30.194438 | orchestrator |
2026-01-02 00:20:30.194450 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-01-02 00:20:41.211426 | orchestrator | changed: [testbed-manager]
2026-01-02 00:20:41.211552 | orchestrator |
2026-01-02 00:20:41.211569 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-01-02 00:20:41.298149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-01-02 00:20:41.298241 | orchestrator |
2026-01-02 00:20:41.298254 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-01-02 00:20:41.298266 | orchestrator |
2026-01-02 00:20:41.298278 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-01-02 00:20:41.360275 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:20:41.360369 | orchestrator |
2026-01-02 00:20:41.360380 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-01-02 00:20:41.423514 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-01-02 00:20:41.423594 | orchestrator |
2026-01-02 00:20:41.423608 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-01-02 00:20:42.242528 | orchestrator | changed: [testbed-manager]
2026-01-02 00:20:42.242663 | orchestrator |
2026-01-02 00:20:42.242694 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-01-02 00:20:45.474377 | orchestrator | ok: [testbed-manager]
2026-01-02 00:20:45.474485 | orchestrator |
2026-01-02 00:20:45.474501 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-01-02 00:20:45.550539 | orchestrator | ok: [testbed-manager] => {
2026-01-02 00:20:45.550638 | orchestrator | "version_check_result.stdout_lines": [
2026-01-02 00:20:45.550655 | orchestrator | "=== OSISM Container Version Check ===",
2026-01-02 00:20:45.550665 | orchestrator | "Checking running containers against expected versions...",
2026-01-02 00:20:45.550676 | orchestrator | "",
2026-01-02 00:20:45.550686 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-01-02 00:20:45.550695 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest",
2026-01-02 00:20:45.550704 | orchestrator | " Enabled: true",
2026-01-02 00:20:45.550713 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest",
2026-01-02 00:20:45.550722 | orchestrator | " Status: ✅ MATCH",
2026-01-02 00:20:45.550731 | orchestrator | "",
2026-01-02 00:20:45.550740 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-01-02 00:20:45.550749 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest",
2026-01-02 00:20:45.550758 | orchestrator | " Enabled: true",
2026-01-02 00:20:45.550767 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest",
2026-01-02 00:20:45.550776 | orchestrator | " Status: ✅ MATCH",
2026-01-02 00:20:45.550784 | orchestrator | "",
2026-01-02 00:20:45.550793 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-01-02 00:20:45.550802 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest",
2026-01-02 00:20:45.550811 | orchestrator | " Enabled: true",
2026-01-02 00:20:45.550820 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest",
2026-01-02 00:20:45.550829 | orchestrator | " Status: ✅ MATCH",
2026-01-02 00:20:45.550838 | orchestrator | "",
2026-01-02 00:20:45.550846 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-01-02 00:20:45.550856 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef",
2026-01-02 00:20:45.550865 | orchestrator | " Enabled: true",
2026-01-02 00:20:45.550874 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef",
2026-01-02 00:20:45.550904 | orchestrator | " Status: ✅ MATCH",
2026-01-02 00:20:45.550914 | orchestrator | "",
2026-01-02 00:20:45.550922 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-01-02 00:20:45.550931 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-01-02 00:20:45.550940 | orchestrator | " Enabled: true",
2026-01-02 00:20:45.550981 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-01-02 00:20:45.550995 | orchestrator | " Status: ✅ MATCH",
2026-01-02 00:20:45.551007 | orchestrator | "",
2026-01-02 00:20:45.551016 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-01-02 00:20:45.551025 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-02 00:20:45.551034 | orchestrator | " Enabled: true",
2026-01-02 00:20:45.551042 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-02 00:20:45.551051 | orchestrator | " Status: ✅ MATCH",
2026-01-02 00:20:45.551060 | orchestrator | "",
2026-01-02 00:20:45.551069 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-01-02 00:20:45.551077 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-02 00:20:45.551086 | orchestrator | " Enabled: true",
2026-01-02 00:20:45.551095 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-02 00:20:45.551103 | orchestrator | " Status: ✅ MATCH",
2026-01-02 00:20:45.551112 | orchestrator | "",
2026-01-02 00:20:45.551120 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-01-02 00:20:45.551129 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-02 00:20:45.551137 | orchestrator | " Enabled: true",
2026-01-02 00:20:45.551154 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-02 00:20:45.551167 | orchestrator | " Status: ✅ MATCH",
2026-01-02 00:20:45.551176 | orchestrator | "",
2026-01-02 00:20:45.551185 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-01-02 00:20:45.551194 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2026-01-02 00:20:45.551203 | orchestrator | " Enabled: true",
2026-01-02 00:20:45.551212 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2026-01-02 00:20:45.551220 | orchestrator | " Status: ✅ MATCH",
2026-01-02 00:20:45.551229 | orchestrator | "",
2026-01-02 00:20:45.551238 | orchestrator | "Checking service: redis (Redis Cache)",
2026-01-02 00:20:45.551246 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-02 00:20:45.551255 | orchestrator | " Enabled: true",
2026-01-02 00:20:45.551263 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-02 00:20:45.551272 | orchestrator | " Status: ✅ MATCH",
2026-01-02 00:20:45.551280 | orchestrator | "",
2026-01-02 00:20:45.551289 | orchestrator | "Checking service: api (OSISM API Service)",
2026-01-02 00:20:45.551298 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-02 00:20:45.551307 | orchestrator | " Enabled: true",
2026-01-02 00:20:45.551315 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-02 00:20:45.551324 | orchestrator | " Status: ✅ MATCH",
2026-01-02 00:20:45.551332 | orchestrator | "",
2026-01-02 00:20:45.551341 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-01-02 00:20:45.551350 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-02 00:20:45.551358 | orchestrator | " Enabled: true",
2026-01-02 00:20:45.551367 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-02 00:20:45.551376 | orchestrator | " Status: ✅ MATCH",
2026-01-02 00:20:45.551384 | orchestrator | "",
2026-01-02 00:20:45.551393 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-01-02 00:20:45.551401 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-02 00:20:45.551410 | orchestrator | " Enabled: true",
2026-01-02 00:20:45.551419 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-02 00:20:45.551427 | orchestrator | " Status: ✅ MATCH",
2026-01-02 00:20:45.551436 | orchestrator | "",
2026-01-02 00:20:45.551445 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-01-02 00:20:45.551469 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-02 00:20:45.551483 | orchestrator | " Enabled: true",
2026-01-02 00:20:45.551498 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-02 00:20:45.551512 | orchestrator | " Status: ✅ MATCH",
2026-01-02 00:20:45.551526 | orchestrator | "",
2026-01-02 00:20:45.551542 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-01-02 00:20:45.551576 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-02 00:20:45.551591 | orchestrator | " Enabled: true",
2026-01-02 00:20:45.551605 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-02 00:20:45.551620 | orchestrator | " Status: ✅ MATCH",
2026-01-02 00:20:45.551635 | orchestrator | "",
2026-01-02 00:20:45.551649 | orchestrator | "=== Summary ===",
2026-01-02 00:20:45.551664 | orchestrator | "Errors (version mismatches): 0",
2026-01-02 00:20:45.551679 | orchestrator | "Warnings (expected containers not running): 0",
2026-01-02 00:20:45.551694 | orchestrator | "",
2026-01-02 00:20:45.551704 | orchestrator | "✅ All running containers match expected versions!"
2026-01-02 00:20:45.551713 | orchestrator | ]
2026-01-02 00:20:45.551722 | orchestrator | }
2026-01-02 00:20:45.551731 | orchestrator |
2026-01-02 00:20:45.551740 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-01-02 00:20:45.609868 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:20:45.610150 | orchestrator |
2026-01-02 00:20:45.610171 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:20:45.610183 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2026-01-02 00:20:45.610194 | orchestrator |
2026-01-02 00:20:45.689637 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-02 00:20:45.689722 | orchestrator | + deactivate
2026-01-02 00:20:45.689739 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-01-02 00:20:45.689753 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-02 00:20:45.689765 | orchestrator | + export PATH
2026-01-02 00:20:45.689777 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-01-02 00:20:45.689789 | orchestrator | + '['
-n '' ']' 2026-01-02 00:20:45.689800 | orchestrator | + hash -r 2026-01-02 00:20:45.689812 | orchestrator | + '[' -n '' ']' 2026-01-02 00:20:45.689823 | orchestrator | + unset VIRTUAL_ENV 2026-01-02 00:20:45.689834 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-01-02 00:20:45.689846 | orchestrator | + '[' '!' '' = nondestructive ']' 2026-01-02 00:20:45.689857 | orchestrator | + unset -f deactivate 2026-01-02 00:20:45.689869 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-01-02 00:20:45.698166 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-02 00:20:45.698196 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-02 00:20:45.698207 | orchestrator | + local max_attempts=60 2026-01-02 00:20:45.698218 | orchestrator | + local name=ceph-ansible 2026-01-02 00:20:45.698229 | orchestrator | + local attempt_num=1 2026-01-02 00:20:45.698931 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-02 00:20:45.737253 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-02 00:20:45.737353 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-02 00:20:45.737365 | orchestrator | + local max_attempts=60 2026-01-02 00:20:45.737377 | orchestrator | + local name=kolla-ansible 2026-01-02 00:20:45.737388 | orchestrator | + local attempt_num=1 2026-01-02 00:20:45.737457 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-02 00:20:45.773680 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-02 00:20:45.773766 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-02 00:20:45.773783 | orchestrator | + local max_attempts=60 2026-01-02 00:20:45.773796 | orchestrator | + local name=osism-ansible 2026-01-02 00:20:45.773807 | orchestrator | + local attempt_num=1 2026-01-02 00:20:45.774681 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 
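The `wait_for_container_healthy` calls traced above poll `docker inspect` until a container reports `healthy`. A minimal standalone sketch of that polling pattern follows; `CHECK_CMD` is a hypothetical hook (not in the original script) so the loop can be exercised without Docker, while the default mirrors the `docker inspect -f '{{.State.Health.Status}}'` call seen in the trace:

```shell
#!/bin/sh
# Sketch of the health-wait pattern from the deploy script (assumption:
# the real script hardcodes the docker call; CHECK_CMD is added here
# only to make the loop testable without a Docker daemon).
CHECK_CMD="${CHECK_CMD:-docker inspect -f '{{.State.Health.Status}}'}"

wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    # Poll until the container reports "healthy", giving up after
    # max_attempts checks.
    until [ "$(eval "$CHECK_CMD" "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name not healthy after $max_attempts checks" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the log the first check already returns `healthy` for all three containers, so the loop body never runs.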
2026-01-02 00:20:45.810679 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-02 00:20:45.810754 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-02 00:20:45.810768 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-02 00:20:46.486454 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-01-02 00:20:46.653424 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-01-02 00:20:46.653534 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-01-02 00:20:46.653551 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-01-02 00:20:46.653563 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-01-02 00:20:46.653576 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-01-02 00:20:46.653588 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-01-02 00:20:46.653600 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-01-02 00:20:46.653611 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-01-02 00:20:46.653645 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-01-02 00:20:46.653657 | orchestrator | manager-mariadb-1 
registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-01-02 00:20:46.653669 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-01-02 00:20:46.653680 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-01-02 00:20:46.653691 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-01-02 00:20:46.653702 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-01-02 00:20:46.653714 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-01-02 00:20:46.653725 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-01-02 00:20:46.659230 | orchestrator | ++ semver latest 7.0.0 2026-01-02 00:20:46.717547 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-02 00:20:46.717633 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-02 00:20:46.717645 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-01-02 00:20:46.722206 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-01-02 00:20:58.903567 | orchestrator | 2026-01-02 00:20:58 | INFO  | Task 10a1c476-1c4d-4123-9bee-9e100f25518a (resolvconf) was prepared for execution. 
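The `++ semver latest 7.0.0` / `[[ latest == \l\a\t\e\s\t ]]` trace above shows a version gate: a code path is taken when the manager version compares `>= 7.0.0` or is the moving tag `latest`. A simplified sketch of that gate, where `compare` is a stand-in for the external `semver` helper (assumed here to print `-1`/`0`/`1`; it only handles plain `X.Y.Z` strings, and the `latest` special case is checked first rather than after the numeric compare as in the trace):

```shell
#!/bin/sh
# Sketch of the version gate visible in the trace. "compare" is a
# simplified stand-in (assumption) for the external `semver` helper.
compare() {
    # Print -1, 0 or 1 depending on how $1 orders against $2.
    if [ "$1" = "$2" ]; then echo 0; return; fi
    lower=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
    if [ "$lower" = "$1" ]; then echo -1; else echo 1; fi
}

feature_enabled() {
    version="$1"
    # Moving tag: treat as newest (the trace takes this branch).
    [ "$version" = "latest" ] && return 0
    [ "$(compare "$version" 7.0.0)" -ge 0 ]
}
```

Because the deployed image tag is `latest`, the numeric compare prints `-1` in the log, and the `latest` equality check is what enables the subsequent `sed` on `ansible.cfg`.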
2026-01-02 00:20:58.903709 | orchestrator | 2026-01-02 00:20:58 | INFO  | It takes a moment until task 10a1c476-1c4d-4123-9bee-9e100f25518a (resolvconf) has been started and output is visible here. 2026-01-02 00:21:13.704460 | orchestrator | 2026-01-02 00:21:13.704566 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-01-02 00:21:13.704583 | orchestrator | 2026-01-02 00:21:13.704594 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-02 00:21:13.704605 | orchestrator | Friday 02 January 2026 00:21:03 +0000 (0:00:00.146) 0:00:00.146 ******** 2026-01-02 00:21:13.704615 | orchestrator | ok: [testbed-manager] 2026-01-02 00:21:13.704626 | orchestrator | 2026-01-02 00:21:13.704636 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-02 00:21:13.704647 | orchestrator | Friday 02 January 2026 00:21:07 +0000 (0:00:03.957) 0:00:04.103 ******** 2026-01-02 00:21:13.704656 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:21:13.704667 | orchestrator | 2026-01-02 00:21:13.704677 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-02 00:21:13.704687 | orchestrator | Friday 02 January 2026 00:21:07 +0000 (0:00:00.063) 0:00:04.167 ******** 2026-01-02 00:21:13.704697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-01-02 00:21:13.704707 | orchestrator | 2026-01-02 00:21:13.704717 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-02 00:21:13.704726 | orchestrator | Friday 02 January 2026 00:21:07 +0000 (0:00:00.077) 0:00:04.244 ******** 2026-01-02 00:21:13.704736 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-01-02 00:21:13.704746 | orchestrator | 2026-01-02 00:21:13.704756 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-02 00:21:13.704775 | orchestrator | Friday 02 January 2026 00:21:07 +0000 (0:00:00.086) 0:00:04.330 ******** 2026-01-02 00:21:13.704786 | orchestrator | ok: [testbed-manager] 2026-01-02 00:21:13.704796 | orchestrator | 2026-01-02 00:21:13.704806 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-02 00:21:13.704816 | orchestrator | Friday 02 January 2026 00:21:08 +0000 (0:00:01.186) 0:00:05.517 ******** 2026-01-02 00:21:13.704825 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:21:13.704835 | orchestrator | 2026-01-02 00:21:13.704845 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-02 00:21:13.704854 | orchestrator | Friday 02 January 2026 00:21:08 +0000 (0:00:00.073) 0:00:05.591 ******** 2026-01-02 00:21:13.704864 | orchestrator | ok: [testbed-manager] 2026-01-02 00:21:13.704873 | orchestrator | 2026-01-02 00:21:13.704883 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-02 00:21:13.704893 | orchestrator | Friday 02 January 2026 00:21:09 +0000 (0:00:00.561) 0:00:06.152 ******** 2026-01-02 00:21:13.704902 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:21:13.704912 | orchestrator | 2026-01-02 00:21:13.704921 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-02 00:21:13.704932 | orchestrator | Friday 02 January 2026 00:21:09 +0000 (0:00:00.095) 0:00:06.248 ******** 2026-01-02 00:21:13.704969 | orchestrator | changed: [testbed-manager] 2026-01-02 00:21:13.704981 | orchestrator | 2026-01-02 
00:21:13.704992 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-02 00:21:13.705004 | orchestrator | Friday 02 January 2026 00:21:09 +0000 (0:00:00.600) 0:00:06.849 ******** 2026-01-02 00:21:13.705015 | orchestrator | changed: [testbed-manager] 2026-01-02 00:21:13.705026 | orchestrator | 2026-01-02 00:21:13.705037 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-02 00:21:13.705049 | orchestrator | Friday 02 January 2026 00:21:11 +0000 (0:00:01.184) 0:00:08.033 ******** 2026-01-02 00:21:13.705080 | orchestrator | ok: [testbed-manager] 2026-01-02 00:21:13.705092 | orchestrator | 2026-01-02 00:21:13.705103 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-02 00:21:13.705114 | orchestrator | Friday 02 January 2026 00:21:12 +0000 (0:00:01.054) 0:00:09.087 ******** 2026-01-02 00:21:13.705125 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-01-02 00:21:13.705136 | orchestrator | 2026-01-02 00:21:13.705147 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-02 00:21:13.705158 | orchestrator | Friday 02 January 2026 00:21:12 +0000 (0:00:00.086) 0:00:09.173 ******** 2026-01-02 00:21:13.705169 | orchestrator | changed: [testbed-manager] 2026-01-02 00:21:13.705181 | orchestrator | 2026-01-02 00:21:13.705192 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:21:13.705204 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-02 00:21:13.705216 | orchestrator | 2026-01-02 00:21:13.705227 | orchestrator | 2026-01-02 00:21:13.705239 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-02 00:21:13.705250 | orchestrator | Friday 02 January 2026 00:21:13 +0000 (0:00:01.198) 0:00:10.372 ******** 2026-01-02 00:21:13.705262 | orchestrator | =============================================================================== 2026-01-02 00:21:13.705272 | orchestrator | Gathering Facts --------------------------------------------------------- 3.96s 2026-01-02 00:21:13.705283 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.20s 2026-01-02 00:21:13.705295 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.19s 2026-01-02 00:21:13.705306 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.18s 2026-01-02 00:21:13.705317 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.05s 2026-01-02 00:21:13.705327 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.60s 2026-01-02 00:21:13.705353 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.56s 2026-01-02 00:21:13.705363 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.10s 2026-01-02 00:21:13.705373 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-01-02 00:21:13.705382 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-01-02 00:21:13.705392 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-01-02 00:21:13.705401 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-01-02 00:21:13.705411 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-01-02 00:21:14.052135 | 
orchestrator | + osism apply sshconfig 2026-01-02 00:21:26.167765 | orchestrator | 2026-01-02 00:21:26 | INFO  | Task a630bdca-afa8-487d-bb14-d48cbba92f0b (sshconfig) was prepared for execution. 2026-01-02 00:21:26.167900 | orchestrator | 2026-01-02 00:21:26 | INFO  | It takes a moment until task a630bdca-afa8-487d-bb14-d48cbba92f0b (sshconfig) has been started and output is visible here. 2026-01-02 00:21:38.073535 | orchestrator | 2026-01-02 00:21:38.073679 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-01-02 00:21:38.073699 | orchestrator | 2026-01-02 00:21:38.073712 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-01-02 00:21:38.073724 | orchestrator | Friday 02 January 2026 00:21:30 +0000 (0:00:00.143) 0:00:00.143 ******** 2026-01-02 00:21:38.073735 | orchestrator | ok: [testbed-manager] 2026-01-02 00:21:38.073747 | orchestrator | 2026-01-02 00:21:38.073758 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-01-02 00:21:38.073769 | orchestrator | Friday 02 January 2026 00:21:30 +0000 (0:00:00.479) 0:00:00.623 ******** 2026-01-02 00:21:38.073807 | orchestrator | changed: [testbed-manager] 2026-01-02 00:21:38.073820 | orchestrator | 2026-01-02 00:21:38.073830 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-01-02 00:21:38.073841 | orchestrator | Friday 02 January 2026 00:21:31 +0000 (0:00:00.441) 0:00:01.064 ******** 2026-01-02 00:21:38.073852 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-01-02 00:21:38.073864 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-01-02 00:21:38.073875 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-01-02 00:21:38.073886 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-01-02 00:21:38.073897 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-3) 2026-01-02 00:21:38.073907 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-01-02 00:21:38.073918 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-01-02 00:21:38.073929 | orchestrator | 2026-01-02 00:21:38.073979 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-01-02 00:21:38.073991 | orchestrator | Friday 02 January 2026 00:21:37 +0000 (0:00:05.833) 0:00:06.898 ******** 2026-01-02 00:21:38.074002 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:21:38.074079 | orchestrator | 2026-01-02 00:21:38.074096 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-01-02 00:21:38.074110 | orchestrator | Friday 02 January 2026 00:21:37 +0000 (0:00:00.095) 0:00:06.994 ******** 2026-01-02 00:21:38.074124 | orchestrator | changed: [testbed-manager] 2026-01-02 00:21:38.074138 | orchestrator | 2026-01-02 00:21:38.074151 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:21:38.074166 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:21:38.074179 | orchestrator | 2026-01-02 00:21:38.074193 | orchestrator | 2026-01-02 00:21:38.074205 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:21:38.074219 | orchestrator | Friday 02 January 2026 00:21:37 +0000 (0:00:00.609) 0:00:07.603 ******** 2026-01-02 00:21:38.074232 | orchestrator | =============================================================================== 2026-01-02 00:21:38.074245 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.83s 2026-01-02 00:21:38.074258 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.61s 2026-01-02 00:21:38.074270 | orchestrator | 
osism.commons.sshconfig : Get home directory of operator user ----------- 0.48s 2026-01-02 00:21:38.074283 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.44s 2026-01-02 00:21:38.074296 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.10s 2026-01-02 00:21:38.413780 | orchestrator | + osism apply known-hosts 2026-01-02 00:21:50.661252 | orchestrator | 2026-01-02 00:21:50 | INFO  | Task 657865bc-cecb-486c-b75f-e540a34071f9 (known-hosts) was prepared for execution. 2026-01-02 00:21:50.661374 | orchestrator | 2026-01-02 00:21:50 | INFO  | It takes a moment until task 657865bc-cecb-486c-b75f-e540a34071f9 (known-hosts) has been started and output is visible here. 2026-01-02 00:22:07.892175 | orchestrator | 2026-01-02 00:22:07.892301 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-01-02 00:22:07.892321 | orchestrator | 2026-01-02 00:22:07.892330 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-01-02 00:22:07.892339 | orchestrator | Friday 02 January 2026 00:21:54 +0000 (0:00:00.181) 0:00:00.181 ******** 2026-01-02 00:22:07.892348 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-02 00:22:07.892356 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-02 00:22:07.892364 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-02 00:22:07.892372 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-02 00:22:07.892414 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-02 00:22:07.892423 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-02 00:22:07.892438 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-02 00:22:07.892446 | orchestrator | 2026-01-02 00:22:07.892454 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts 
entries for all hosts with hostname] *** 2026-01-02 00:22:07.892462 | orchestrator | Friday 02 January 2026 00:22:00 +0000 (0:00:06.111) 0:00:06.293 ******** 2026-01-02 00:22:07.892471 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-02 00:22:07.892481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-02 00:22:07.892497 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-02 00:22:07.892505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-02 00:22:07.892512 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-02 00:22:07.892520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-02 00:22:07.892527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-02 00:22:07.892534 | orchestrator | 2026-01-02 00:22:07.892541 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:07.892549 | orchestrator | Friday 02 January 2026 00:22:01 +0000 (0:00:00.163) 
0:00:06.457 ******** 2026-01-02 00:22:07.892557 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH9oyU8SWkpoEfzO0lKVY+QY0aWNLcb8YsTazolWVWqvc1KvzSRnfdBHutfEvMM+7D41NzkXvVc84k9lhojkQH8=) 2026-01-02 00:22:07.892568 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgxSyAkhcp+gwagGo+/UllcR/Lh8Luw3Fu61r+bgfkV4KEVjW5YPzv/+zmuPzfdNByR5IUnjRuy6KjNnblaMJfZ24q2hlq61jbvqQMqPJZQYOxH3NDugDvpYTEB3JZMg9cIaPdPurBShgcqysBvmw0GotmCpQzFAm6eMDP/4Bxg8+XvdeOK56wQXGv6cXAb/LQqNZYx7L+Wd071vppQ0tZ3ulICfM8PfSq+X8rV/9LUcFT5QzzmLfgVsxAEqjvw5ayy5vhqmjdFB8EMWmfqT2T294EuI99sJ3c/SyUtRO5H/lhN+fxRQyCSpwltfDvYjWCO87ujYAX5rVqwDSUxpnuQv5Uu0hJjWu5BTyZqQNAmdeUEdJkMqnF0cgxJAk1Y44DNUvom6j+bUBmz8CULw51tC76gOXvcaGH4Ic5rsaUjtMQTGFm8E7HPdODLlnvZSFkkHURLXOpN3WnnLqNq++1v5/TCr5lCdhq0hDzumzUCAY8CbsFb8eV+BhY7aL4T60=) 2026-01-02 00:22:07.892581 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINM6KYRlkUm5FeL+3Gw0/Oj+kiYo349e4bvG/orMJo8G) 2026-01-02 00:22:07.892591 | orchestrator | 2026-01-02 00:22:07.892598 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:07.892605 | orchestrator | Friday 02 January 2026 00:22:02 +0000 (0:00:01.204) 0:00:07.662 ******** 2026-01-02 00:22:07.892633 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDSDMQlh3jt/L7EMluXW0vN6pCHLONHLnPwLPPYG0oTvhM3GN6bOtMEm+CI+bLGuXgLjSXYFHi1+C/RBqC6HA9Uw4sedQWiOH4RM2fQAXBuLOHB9HJa1WLD+wBMpfM3DitIPSlOP8hmGDl9+ACzraIa3QYIVgNsLsXZwbJZGaAlRKSGsxzML+uoXj39IjQ5/vWSJ5QnDjczXpbIVOa7nbb0kAHUDlVFjUMOwOzrpXw8fj7vYp35ghPoJDo6h+sNptKcyuSzqdos7JOKZf0mLiz0066sJbf9iSwJZ83cVF2tsfviCWj08ISppQLDNpU5dAp/78g45UrJweTK3pSXdi8YTZGT7imvtV7L3bqRecneWZi64lnS2ZDIjpBb2/GlX+t6aqyDpHuTVJ3tSUih8pc9Gw4W1KWdcwqKf3cc/U0xBhu/oRbDlBzII0w9TJjY+ntklJ5+rf2DIipXuX354bN2BkWTmlrOZo3YGtdZNdimWQMbDY3lR3oU7GjujpdTij8=) 2026-01-02 00:22:07.892656 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPoUilvO3fHbc/zbxhqZ4HKbKbzLc3oGbCv3EBvRxe0nlLXRG7w48ahRgZJmPxD508KajjnEs/QgtrDg5mb5Ng4=) 2026-01-02 00:22:07.892668 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBG/b8GtPf06h9/NzYH/MGqAEwnuc/v9JSm0MrA64NZB) 2026-01-02 00:22:07.892681 | orchestrator | 2026-01-02 00:22:07.892694 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:07.892708 | orchestrator | Friday 02 January 2026 00:22:03 +0000 (0:00:01.140) 0:00:08.802 ******** 2026-01-02 00:22:07.892720 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDxYQ/0isgAowGaWV8E+N9y9yM2E12H69c6IdvYuvlLR) 2026-01-02 00:22:07.892734 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCSJY8LNwg6W0697/hpolScehFucdecYWFwNmao8/EsJP0bjq2Wu41eaqmsqN3IH6EkBzl98gOwU0MlNwyVtXpu7ojUMmeoB/RQNcgDBVCP5O55TKPFv34E3AIXr35ySaFEmGJinKesj98mR2mr+O30M8W0jsIyOjc+IE6+md6VdWt0WFzF0eR83L3BF8KnedVCmjHxAAlYENMNhBmFFmgWV/DwWBsCqqNDhFtKJDPBL54nRaeULbuLZRkeQp5spkHvQWzofWLMKYIL2maCdOV3klaLfhgzOG7MOo6oeLIGP63IgxbfyOyegkWDCEdyXZwAtSTHPlcpKLfIu2xzew3emDHCL7A0nqecIDEjI/C0TVPreMNzJDAAVapB1x2Q6RUsmYQqH/ziA2Pe9ZClpsvZNq3IAkXrOkyPII287jlt+eRn4zAARRX1tFMqi5muETUFkOrembsuPtM2R3b4ACeOo0KJ1GowYe+U1mumkEfAvr5YfPqODGK2Hb5kA7+0r8M=) 2026-01-02 00:22:07.892746 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNkNWRZDEZ9IwBbMense+Ua4YLP4pYnk5iV+cDHsI3xWBWjy2REPaNwhGQRGex4bTZccdUChUw3HcXquI640HIs=) 2026-01-02 00:22:07.892758 | orchestrator | 2026-01-02 00:22:07.892771 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:07.892785 | orchestrator | Friday 02 January 2026 00:22:04 +0000 (0:00:01.127) 0:00:09.929 ******** 2026-01-02 00:22:07.892862 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDp6SwTE2sBo+3gaeQIZ8ntKM2PIJinNdlU1+vVZKY+OkJpSCHxN6Wf9oZQ8zmzXaCfTasBm0Od6YAKSNWqMZb86/LzcdU5hnNf6ZGvQphbFhrgE466XYuWhHBiNWN7L0S8ywQCLPTIJlpdJRt++gdJNhOq7Pp2jX/k0qKy1rifghYusCwV2O1+Yt3Ra33Gb+Jd8uDP5OfKOKC06y2FHfor4riY8OzOUquAmpP62bGdpASoGuVBPYeSVcKibQXzs9DtagqNkQWXuAKPMzfdKsvj8fIHAregerqsbX4PYT++U9SzmsJ0v7L21BxCiRLODff1r1klXjNRkrsFbtKb4uGRVWq0C+6HDTz9wG88h1M776prhvT4zHL5GgpzJ/HhO7RPnh7KyJQ10C0W15iWYz1kQTtoTholUzA25MErxPYjsY+6XuuuQRxAxed5HLhcs5B6brPpxKdTw5aIEKT0lQBAapbHXaIQKTDZjexNn3mnkDNdeHyAWjudmJcYe3Rrt1U=) 2026-01-02 00:22:07.892874 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMYsOy2RMDmf29XaTpWF2V+NuaLuYYkz33ZmqFMQHAofJDVrZP7i6vvbDY9vguCV515+vviSmfeBFEOnDEVLgJA=) 
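The known_hosts tasks above scan each host with `ssh-keyscan` and then write one entry per key, reporting `changed` only when an entry is new. The idempotent-append part can be sketched as below; `update_known_hosts` is a hypothetical helper for illustration, not the actual `osism.commons.known_hosts` implementation (which uses Ansible modules rather than shell):

```shell
#!/bin/sh
# Sketch (assumption) of the idempotent known_hosts write the role
# performs: append each "host keytype key" entry only if that exact
# line is not already present.
update_known_hosts() {
    file="$1"; shift
    for entry in "$@"; do
        # -x: match the whole line, -F: fixed string, no regex.
        grep -qxF -- "$entry" "$file" 2>/dev/null \
            || printf '%s\n' "$entry" >> "$file"
    done
}
```

Run twice with the same entries, the second pass appends nothing, matching the `ok`-vs-`changed` distinction in the play output.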
2026-01-02 00:22:07.892883 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILkqQ6/KyI3kjqU95VeY08yw8mu+0W0gAwfkdwJWKfxK) 2026-01-02 00:22:07.892892 | orchestrator | 2026-01-02 00:22:07.892901 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:07.892910 | orchestrator | Friday 02 January 2026 00:22:05 +0000 (0:00:01.068) 0:00:10.998 ******** 2026-01-02 00:22:07.892919 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICU/IsUh1JmQO/XnxBTnFdflTBI69Xi16+ndoygD4tsp) 2026-01-02 00:22:07.892928 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDpLkc+igdJABSOfvy79/AK0wIHQWJ6qEzx8rSQdsAhYWMSnJgerb7xwFuD28c5BhAbjNKYspQh9GQTkFOwhy4M+AUrW3Loa/wGO55jjGMKlG4j3fIKQsqTbhpY2dSHLrBB9xPS6QNUagZWtuPG/oZheshPhZqcSDbUSTEpQKBBOXnCn/MwL4V0y0MTYE6sEZeEsoFhYAh2APFSh0HgbMJwAperWjqte0qra6SpTnr5lYaJQ/7LEjfji17qx4bUr9qhQ/vYpXl92f8+jeN5oRw1hiw6TCZG/4qq77jM0ppkeSlIL12rcxHjzyuM5CHszb0P1ARk4aY18FvUVqihPpzW5y68/oNLFS6RI2HXmZv9unl2aJ3KQksgv+TPvlYlHBDJKrsqtOr0dy5E/tFR+qAVEl5zN33NHqCHtPt8vAR/Yy5g6UNMSWEavqriq7MAEN0Bwuphceh/kWNT+3r0FjaGimhG5vNRcjtCohUygBXAJaeon7ELoyuqGESV6p2dWnE=) 2026-01-02 00:22:07.892994 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOnQF1gAhW7p0Tcgxui+6p1KnIk54f0sLQ23f9fWm9o0qauvtVbV4qQuTeBF0OlYf/HcJb+qzl+GMGJQqzDeXVs=) 2026-01-02 00:22:07.893005 | orchestrator | 2026-01-02 00:22:07.893014 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:07.893023 | orchestrator | Friday 02 January 2026 00:22:06 +0000 (0:00:01.078) 0:00:12.076 ******** 2026-01-02 00:22:07.893043 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDNez8YjMr9Mejy+NuO/f8xPtpPZgBw+n/9UCuYsmJlg8kXapl5YGIrili/QiGPftbtzDr6InWmDNfrauo80OsX4FbnLrclXGXOnuVv8w13OGAUAbdTifm4r3yUK2qMCGAUMkAl87t6QcoC3E0iecW9XqlRky189si88/9zF7pCvuAZYB98nBMAooiCyuWyeSS30ainXJVid77pgUaDXV0gWOxjszI8aY0yWWA+Uj52CerhrTlzO8Av9jb4aCLUzQZBtLZnJW9Ij/K7bMl7zrnF/QB9x5JvzudRZ3bTv+ab0VfrY3ZaYMxqS370t43yMmIpn0QnCD1BS00A6+eE6pZOnYYI4i4o9dI5NLbyB8LSdKK8sySC3Hj0JwYiXO+SY67rxwvqqyH4nPoQd9D1e1RzKVaCIV/b78ANxiHk9T9Ce6hXnpX4q84zhdg616B+DTsh6+rz3lGLc6lIlUkuXA5tnQmFmldKg70G1xarLE7iUlgma6gaQnj4BVNusaQ7/OU=) 2026-01-02 00:22:19.165719 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDb3YPWLBfLpVQXv8F2niWCr5wnuKVjiaxnTUAOmKuaP) 2026-01-02 00:22:19.165811 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMQCstIIjZFbyX8YjyomRWGCPylv9n7gvmZP1Wih6DquAP62c6CLA853cWljLrKN/maAC0oDYmejci3zXL9SyuQ=) 2026-01-02 00:22:19.165822 | orchestrator | 2026-01-02 00:22:19.165830 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:19.165839 | orchestrator | Friday 02 January 2026 00:22:07 +0000 (0:00:01.149) 0:00:13.225 ******** 2026-01-02 00:22:19.165847 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCaORLs/XPR8lNW0ZmaalDRd4fp5olyIYCB/PaLwYDxobATwWo9Pflu4vGxwPHMDdhWcDrnhDDfI9Lw7zhjMCnRtHZRmIYb5AqXjbcNYiNPBiKkfBR10Msntb2mNDLwUnvv/yGtyLKPHUFBmPzYv85sogLGDCgmjxXrAMxVXtW1mcaZ5c1MT2RkC2B1XqU5yn5gJtxazQ4fQjEYjVC34px8f2jVHfEzNE60MUAKWEOIyw2MQR+rj6CW+rf1OPHq2ukAtrAvCdwWMds2og2crGbGw6slQU0SMP9xdoZzakTvU0i+CKxbj85pYMxIQRDbnPLRdS/GLPvoxcAjtfsuLUzZkn/0aQVREd13pXJFo/oPa9/q87PJYeGuFouScWNkIQQl0VNUg9mtKP18gkQSG9ok0PDfGR7VDPASTra55CKSPWkqroOLfmbNfS57kFKBPqAf8qY8ivE4J/d2+8sZnVvVJIqXcsdc/0rz1XCQi6c/kl4SIJWfaFjFhgxGUozQ6M8=) 2026-01-02 00:22:19.165856 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBItT9UPvypLLa5Y8g9ajsr5ry6EdQoXhJIwlc9ghJ0YSVutSUSc1KvppeZv8GaBzemkPiDmEm1W2blyyfyNolmU=) 2026-01-02 00:22:19.165863 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGJGoEnTOLWnm9MpTU8iLEH8WTyLJLInHQ5BG8bDNDxN) 2026-01-02 00:22:19.165870 | orchestrator | 2026-01-02 00:22:19.165877 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-02 00:22:19.165884 | orchestrator | Friday 02 January 2026 00:22:08 +0000 (0:00:01.108) 0:00:14.334 ******** 2026-01-02 00:22:19.165893 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-02 00:22:19.165900 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-02 00:22:19.165906 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-02 00:22:19.165913 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-02 00:22:19.165920 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-02 00:22:19.165927 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-02 00:22:19.165933 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-02 00:22:19.165994 | orchestrator | 2026-01-02 00:22:19.166001 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-02 00:22:19.166069 | orchestrator | Friday 02 January 2026 00:22:14 +0000 (0:00:05.342) 0:00:19.677 ******** 2026-01-02 00:22:19.166078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-02 00:22:19.166087 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for 
testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-02 00:22:19.166093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-02 00:22:19.166112 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-02 00:22:19.166119 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-02 00:22:19.166125 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-02 00:22:19.166131 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-02 00:22:19.166161 | orchestrator | 2026-01-02 00:22:19.166168 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:19.166175 | orchestrator | Friday 02 January 2026 00:22:14 +0000 (0:00:00.195) 0:00:19.873 ******** 2026-01-02 00:22:19.166181 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH9oyU8SWkpoEfzO0lKVY+QY0aWNLcb8YsTazolWVWqvc1KvzSRnfdBHutfEvMM+7D41NzkXvVc84k9lhojkQH8=) 2026-01-02 00:22:19.166206 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCgxSyAkhcp+gwagGo+/UllcR/Lh8Luw3Fu61r+bgfkV4KEVjW5YPzv/+zmuPzfdNByR5IUnjRuy6KjNnblaMJfZ24q2hlq61jbvqQMqPJZQYOxH3NDugDvpYTEB3JZMg9cIaPdPurBShgcqysBvmw0GotmCpQzFAm6eMDP/4Bxg8+XvdeOK56wQXGv6cXAb/LQqNZYx7L+Wd071vppQ0tZ3ulICfM8PfSq+X8rV/9LUcFT5QzzmLfgVsxAEqjvw5ayy5vhqmjdFB8EMWmfqT2T294EuI99sJ3c/SyUtRO5H/lhN+fxRQyCSpwltfDvYjWCO87ujYAX5rVqwDSUxpnuQv5Uu0hJjWu5BTyZqQNAmdeUEdJkMqnF0cgxJAk1Y44DNUvom6j+bUBmz8CULw51tC76gOXvcaGH4Ic5rsaUjtMQTGFm8E7HPdODLlnvZSFkkHURLXOpN3WnnLqNq++1v5/TCr5lCdhq0hDzumzUCAY8CbsFb8eV+BhY7aL4T60=) 2026-01-02 00:22:19.166214 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINM6KYRlkUm5FeL+3Gw0/Oj+kiYo349e4bvG/orMJo8G) 2026-01-02 00:22:19.166220 | orchestrator | 2026-01-02 00:22:19.166227 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:19.166233 | orchestrator | Friday 02 January 2026 00:22:15 +0000 (0:00:01.124) 0:00:20.998 ******** 2026-01-02 00:22:19.166240 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSDMQlh3jt/L7EMluXW0vN6pCHLONHLnPwLPPYG0oTvhM3GN6bOtMEm+CI+bLGuXgLjSXYFHi1+C/RBqC6HA9Uw4sedQWiOH4RM2fQAXBuLOHB9HJa1WLD+wBMpfM3DitIPSlOP8hmGDl9+ACzraIa3QYIVgNsLsXZwbJZGaAlRKSGsxzML+uoXj39IjQ5/vWSJ5QnDjczXpbIVOa7nbb0kAHUDlVFjUMOwOzrpXw8fj7vYp35ghPoJDo6h+sNptKcyuSzqdos7JOKZf0mLiz0066sJbf9iSwJZ83cVF2tsfviCWj08ISppQLDNpU5dAp/78g45UrJweTK3pSXdi8YTZGT7imvtV7L3bqRecneWZi64lnS2ZDIjpBb2/GlX+t6aqyDpHuTVJ3tSUih8pc9Gw4W1KWdcwqKf3cc/U0xBhu/oRbDlBzII0w9TJjY+ntklJ5+rf2DIipXuX354bN2BkWTmlrOZo3YGtdZNdimWQMbDY3lR3oU7GjujpdTij8=) 2026-01-02 00:22:19.166247 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPoUilvO3fHbc/zbxhqZ4HKbKbzLc3oGbCv3EBvRxe0nlLXRG7w48ahRgZJmPxD508KajjnEs/QgtrDg5mb5Ng4=) 2026-01-02 00:22:19.166259 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBG/b8GtPf06h9/NzYH/MGqAEwnuc/v9JSm0MrA64NZB) 2026-01-02 00:22:19.166266 | orchestrator | 2026-01-02 00:22:19.166273 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:19.166281 | orchestrator | Friday 02 January 2026 00:22:16 +0000 (0:00:01.243) 0:00:22.241 ******** 2026-01-02 00:22:19.166292 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNkNWRZDEZ9IwBbMense+Ua4YLP4pYnk5iV+cDHsI3xWBWjy2REPaNwhGQRGex4bTZccdUChUw3HcXquI640HIs=) 2026-01-02 00:22:19.166303 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDxYQ/0isgAowGaWV8E+N9y9yM2E12H69c6IdvYuvlLR) 2026-01-02 00:22:19.166313 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSJY8LNwg6W0697/hpolScehFucdecYWFwNmao8/EsJP0bjq2Wu41eaqmsqN3IH6EkBzl98gOwU0MlNwyVtXpu7ojUMmeoB/RQNcgDBVCP5O55TKPFv34E3AIXr35ySaFEmGJinKesj98mR2mr+O30M8W0jsIyOjc+IE6+md6VdWt0WFzF0eR83L3BF8KnedVCmjHxAAlYENMNhBmFFmgWV/DwWBsCqqNDhFtKJDPBL54nRaeULbuLZRkeQp5spkHvQWzofWLMKYIL2maCdOV3klaLfhgzOG7MOo6oeLIGP63IgxbfyOyegkWDCEdyXZwAtSTHPlcpKLfIu2xzew3emDHCL7A0nqecIDEjI/C0TVPreMNzJDAAVapB1x2Q6RUsmYQqH/ziA2Pe9ZClpsvZNq3IAkXrOkyPII287jlt+eRn4zAARRX1tFMqi5muETUFkOrembsuPtM2R3b4ACeOo0KJ1GowYe+U1mumkEfAvr5YfPqODGK2Hb5kA7+0r8M=) 2026-01-02 00:22:19.166321 | orchestrator | 2026-01-02 00:22:19.166329 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:19.166337 | orchestrator | Friday 02 January 2026 00:22:18 +0000 (0:00:01.103) 0:00:23.345 ******** 2026-01-02 00:22:19.166344 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDp6SwTE2sBo+3gaeQIZ8ntKM2PIJinNdlU1+vVZKY+OkJpSCHxN6Wf9oZQ8zmzXaCfTasBm0Od6YAKSNWqMZb86/LzcdU5hnNf6ZGvQphbFhrgE466XYuWhHBiNWN7L0S8ywQCLPTIJlpdJRt++gdJNhOq7Pp2jX/k0qKy1rifghYusCwV2O1+Yt3Ra33Gb+Jd8uDP5OfKOKC06y2FHfor4riY8OzOUquAmpP62bGdpASoGuVBPYeSVcKibQXzs9DtagqNkQWXuAKPMzfdKsvj8fIHAregerqsbX4PYT++U9SzmsJ0v7L21BxCiRLODff1r1klXjNRkrsFbtKb4uGRVWq0C+6HDTz9wG88h1M776prhvT4zHL5GgpzJ/HhO7RPnh7KyJQ10C0W15iWYz1kQTtoTholUzA25MErxPYjsY+6XuuuQRxAxed5HLhcs5B6brPpxKdTw5aIEKT0lQBAapbHXaIQKTDZjexNn3mnkDNdeHyAWjudmJcYe3Rrt1U=) 2026-01-02 00:22:19.166352 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMYsOy2RMDmf29XaTpWF2V+NuaLuYYkz33ZmqFMQHAofJDVrZP7i6vvbDY9vguCV515+vviSmfeBFEOnDEVLgJA=) 2026-01-02 00:22:19.166369 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILkqQ6/KyI3kjqU95VeY08yw8mu+0W0gAwfkdwJWKfxK) 2026-01-02 00:22:23.794791 | orchestrator | 2026-01-02 00:22:23.794878 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:23.794897 | orchestrator | Friday 02 January 2026 00:22:19 +0000 (0:00:01.150) 0:00:24.495 ******** 2026-01-02 00:22:23.794933 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDpLkc+igdJABSOfvy79/AK0wIHQWJ6qEzx8rSQdsAhYWMSnJgerb7xwFuD28c5BhAbjNKYspQh9GQTkFOwhy4M+AUrW3Loa/wGO55jjGMKlG4j3fIKQsqTbhpY2dSHLrBB9xPS6QNUagZWtuPG/oZheshPhZqcSDbUSTEpQKBBOXnCn/MwL4V0y0MTYE6sEZeEsoFhYAh2APFSh0HgbMJwAperWjqte0qra6SpTnr5lYaJQ/7LEjfji17qx4bUr9qhQ/vYpXl92f8+jeN5oRw1hiw6TCZG/4qq77jM0ppkeSlIL12rcxHjzyuM5CHszb0P1ARk4aY18FvUVqihPpzW5y68/oNLFS6RI2HXmZv9unl2aJ3KQksgv+TPvlYlHBDJKrsqtOr0dy5E/tFR+qAVEl5zN33NHqCHtPt8vAR/Yy5g6UNMSWEavqriq7MAEN0Bwuphceh/kWNT+3r0FjaGimhG5vNRcjtCohUygBXAJaeon7ELoyuqGESV6p2dWnE=) 2026-01-02 00:22:23.794984 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOnQF1gAhW7p0Tcgxui+6p1KnIk54f0sLQ23f9fWm9o0qauvtVbV4qQuTeBF0OlYf/HcJb+qzl+GMGJQqzDeXVs=) 2026-01-02 00:22:23.794997 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICU/IsUh1JmQO/XnxBTnFdflTBI69Xi16+ndoygD4tsp) 2026-01-02 00:22:23.795031 | orchestrator | 2026-01-02 00:22:23.795043 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:23.795053 | orchestrator | Friday 02 January 2026 00:22:20 +0000 (0:00:01.117) 0:00:25.613 ******** 2026-01-02 00:22:23.795064 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMQCstIIjZFbyX8YjyomRWGCPylv9n7gvmZP1Wih6DquAP62c6CLA853cWljLrKN/maAC0oDYmejci3zXL9SyuQ=) 2026-01-02 00:22:23.795076 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNez8YjMr9Mejy+NuO/f8xPtpPZgBw+n/9UCuYsmJlg8kXapl5YGIrili/QiGPftbtzDr6InWmDNfrauo80OsX4FbnLrclXGXOnuVv8w13OGAUAbdTifm4r3yUK2qMCGAUMkAl87t6QcoC3E0iecW9XqlRky189si88/9zF7pCvuAZYB98nBMAooiCyuWyeSS30ainXJVid77pgUaDXV0gWOxjszI8aY0yWWA+Uj52CerhrTlzO8Av9jb4aCLUzQZBtLZnJW9Ij/K7bMl7zrnF/QB9x5JvzudRZ3bTv+ab0VfrY3ZaYMxqS370t43yMmIpn0QnCD1BS00A6+eE6pZOnYYI4i4o9dI5NLbyB8LSdKK8sySC3Hj0JwYiXO+SY67rxwvqqyH4nPoQd9D1e1RzKVaCIV/b78ANxiHk9T9Ce6hXnpX4q84zhdg616B+DTsh6+rz3lGLc6lIlUkuXA5tnQmFmldKg70G1xarLE7iUlgma6gaQnj4BVNusaQ7/OU=) 2026-01-02 00:22:23.795088 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDb3YPWLBfLpVQXv8F2niWCr5wnuKVjiaxnTUAOmKuaP) 2026-01-02 00:22:23.795099 | orchestrator | 2026-01-02 00:22:23.795110 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:23.795121 | orchestrator | Friday 02 January 2026 00:22:21 +0000 (0:00:01.102) 
0:00:26.715 ******** 2026-01-02 00:22:23.795132 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGJGoEnTOLWnm9MpTU8iLEH8WTyLJLInHQ5BG8bDNDxN) 2026-01-02 00:22:23.795143 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCaORLs/XPR8lNW0ZmaalDRd4fp5olyIYCB/PaLwYDxobATwWo9Pflu4vGxwPHMDdhWcDrnhDDfI9Lw7zhjMCnRtHZRmIYb5AqXjbcNYiNPBiKkfBR10Msntb2mNDLwUnvv/yGtyLKPHUFBmPzYv85sogLGDCgmjxXrAMxVXtW1mcaZ5c1MT2RkC2B1XqU5yn5gJtxazQ4fQjEYjVC34px8f2jVHfEzNE60MUAKWEOIyw2MQR+rj6CW+rf1OPHq2ukAtrAvCdwWMds2og2crGbGw6slQU0SMP9xdoZzakTvU0i+CKxbj85pYMxIQRDbnPLRdS/GLPvoxcAjtfsuLUzZkn/0aQVREd13pXJFo/oPa9/q87PJYeGuFouScWNkIQQl0VNUg9mtKP18gkQSG9ok0PDfGR7VDPASTra55CKSPWkqroOLfmbNfS57kFKBPqAf8qY8ivE4J/d2+8sZnVvVJIqXcsdc/0rz1XCQi6c/kl4SIJWfaFjFhgxGUozQ6M8=) 2026-01-02 00:22:23.795155 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBItT9UPvypLLa5Y8g9ajsr5ry6EdQoXhJIwlc9ghJ0YSVutSUSc1KvppeZv8GaBzemkPiDmEm1W2blyyfyNolmU=) 2026-01-02 00:22:23.795166 | orchestrator | 2026-01-02 00:22:23.795177 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-02 00:22:23.795188 | orchestrator | Friday 02 January 2026 00:22:22 +0000 (0:00:01.077) 0:00:27.793 ******** 2026-01-02 00:22:23.795199 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-02 00:22:23.795211 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-02 00:22:23.795222 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-02 00:22:23.795233 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-02 00:22:23.795244 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-02 00:22:23.795254 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-02 
00:22:23.795265 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-02 00:22:23.795276 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:22:23.795287 | orchestrator | 2026-01-02 00:22:23.795315 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-02 00:22:23.795326 | orchestrator | Friday 02 January 2026 00:22:22 +0000 (0:00:00.193) 0:00:27.986 ******** 2026-01-02 00:22:23.795339 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:22:23.795352 | orchestrator | 2026-01-02 00:22:23.795372 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-02 00:22:23.795385 | orchestrator | Friday 02 January 2026 00:22:22 +0000 (0:00:00.070) 0:00:28.057 ******** 2026-01-02 00:22:23.795399 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:22:23.795411 | orchestrator | 2026-01-02 00:22:23.795424 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-02 00:22:23.795437 | orchestrator | Friday 02 January 2026 00:22:22 +0000 (0:00:00.060) 0:00:28.117 ******** 2026-01-02 00:22:23.795449 | orchestrator | changed: [testbed-manager] 2026-01-02 00:22:23.795461 | orchestrator | 2026-01-02 00:22:23.795475 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:22:23.795488 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-02 00:22:23.795502 | orchestrator | 2026-01-02 00:22:23.795514 | orchestrator | 2026-01-02 00:22:23.795527 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:22:23.795540 | orchestrator | Friday 02 January 2026 00:22:23 +0000 (0:00:00.754) 0:00:28.872 ******** 2026-01-02 00:22:23.795553 | orchestrator | =============================================================================== 
2026-01-02 00:22:23.795565 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.11s 2026-01-02 00:22:23.795578 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.34s 2026-01-02 00:22:23.795591 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2026-01-02 00:22:23.795603 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-01-02 00:22:23.795616 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-01-02 00:22:23.795629 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-01-02 00:22:23.795641 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-01-02 00:22:23.795654 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-01-02 00:22:23.795667 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-01-02 00:22:23.795680 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-01-02 00:22:23.795693 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-01-02 00:22:23.795704 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-01-02 00:22:23.795715 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-01-02 00:22:23.795725 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-01-02 00:22:23.795736 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-01-02 00:22:23.795747 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 
2026-01-02 00:22:23.795758 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.75s 2026-01-02 00:22:23.795768 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.20s 2026-01-02 00:22:23.795779 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.19s 2026-01-02 00:22:23.795790 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-01-02 00:22:24.128992 | orchestrator | + osism apply squid 2026-01-02 00:22:36.308720 | orchestrator | 2026-01-02 00:22:36 | INFO  | Task 21d7a68f-e4da-4e0e-ae3c-41dc2b499af6 (squid) was prepared for execution. 2026-01-02 00:22:36.308793 | orchestrator | 2026-01-02 00:22:36 | INFO  | It takes a moment until task 21d7a68f-e4da-4e0e-ae3c-41dc2b499af6 (squid) has been started and output is visible here. 2026-01-02 00:24:38.738988 | orchestrator | 2026-01-02 00:24:38.739107 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-02 00:24:38.739151 | orchestrator | 2026-01-02 00:24:38.739165 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-02 00:24:38.739195 | orchestrator | Friday 02 January 2026 00:22:40 +0000 (0:00:00.173) 0:00:00.173 ******** 2026-01-02 00:24:38.739207 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-02 00:24:38.739219 | orchestrator | 2026-01-02 00:24:38.739230 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-02 00:24:38.739241 | orchestrator | Friday 02 January 2026 00:22:40 +0000 (0:00:00.095) 0:00:00.269 ******** 2026-01-02 00:24:38.739252 | orchestrator | ok: [testbed-manager] 2026-01-02 00:24:38.739265 | orchestrator | 2026-01-02 
00:24:38.739276 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-01-02 00:24:38.739287 | orchestrator | Friday 02 January 2026 00:22:42 +0000 (0:00:01.556) 0:00:01.825 ******** 2026-01-02 00:24:38.739299 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-02 00:24:38.739310 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-02 00:24:38.739321 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-02 00:24:38.739331 | orchestrator | 2026-01-02 00:24:38.739342 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-02 00:24:38.739353 | orchestrator | Friday 02 January 2026 00:22:43 +0000 (0:00:01.176) 0:00:03.001 ******** 2026-01-02 00:24:38.739364 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-02 00:24:38.739375 | orchestrator | 2026-01-02 00:24:38.739386 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-02 00:24:38.739397 | orchestrator | Friday 02 January 2026 00:22:44 +0000 (0:00:01.074) 0:00:04.076 ******** 2026-01-02 00:24:38.739408 | orchestrator | ok: [testbed-manager] 2026-01-02 00:24:38.739419 | orchestrator | 2026-01-02 00:24:38.739430 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-02 00:24:38.739445 | orchestrator | Friday 02 January 2026 00:22:44 +0000 (0:00:00.381) 0:00:04.458 ******** 2026-01-02 00:24:38.739456 | orchestrator | changed: [testbed-manager] 2026-01-02 00:24:38.739467 | orchestrator | 2026-01-02 00:24:38.739478 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-02 00:24:38.739491 | orchestrator | Friday 02 January 2026 00:22:45 +0000 (0:00:00.989) 0:00:05.447 ******** 2026-01-02 00:24:38.739503 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage 
squid service (10 retries left). 2026-01-02 00:24:38.739517 | orchestrator | ok: [testbed-manager] 2026-01-02 00:24:38.739530 | orchestrator | 2026-01-02 00:24:38.739543 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-01-02 00:24:38.739555 | orchestrator | Friday 02 January 2026 00:23:25 +0000 (0:00:39.712) 0:00:45.160 ******** 2026-01-02 00:24:38.739569 | orchestrator | changed: [testbed-manager] 2026-01-02 00:24:38.739581 | orchestrator | 2026-01-02 00:24:38.739594 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-01-02 00:24:38.739607 | orchestrator | Friday 02 January 2026 00:23:37 +0000 (0:00:11.997) 0:00:57.157 ******** 2026-01-02 00:24:38.739621 | orchestrator | Pausing for 60 seconds 2026-01-02 00:24:38.739634 | orchestrator | changed: [testbed-manager] 2026-01-02 00:24:38.739647 | orchestrator | 2026-01-02 00:24:38.739660 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-01-02 00:24:38.739673 | orchestrator | Friday 02 January 2026 00:24:37 +0000 (0:01:00.087) 0:01:57.245 ******** 2026-01-02 00:24:38.739686 | orchestrator | ok: [testbed-manager] 2026-01-02 00:24:38.739699 | orchestrator | 2026-01-02 00:24:38.739712 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-01-02 00:24:38.739725 | orchestrator | Friday 02 January 2026 00:24:37 +0000 (0:00:00.076) 0:01:57.321 ******** 2026-01-02 00:24:38.739737 | orchestrator | changed: [testbed-manager] 2026-01-02 00:24:38.739751 | orchestrator | 2026-01-02 00:24:38.739763 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:24:38.739784 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:24:38.739797 | orchestrator | 2026-01-02 00:24:38.739809 | orchestrator | 2026-01-02 
00:24:38.739822 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:24:38.739834 | orchestrator | Friday 02 January 2026 00:24:38 +0000 (0:00:00.675) 0:01:57.997 ******** 2026-01-02 00:24:38.739846 | orchestrator | =============================================================================== 2026-01-02 00:24:38.739857 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-01-02 00:24:38.739868 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 39.71s 2026-01-02 00:24:38.739879 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.00s 2026-01-02 00:24:38.739890 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.56s 2026-01-02 00:24:38.739900 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.18s 2026-01-02 00:24:38.739911 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.07s 2026-01-02 00:24:38.739922 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.99s 2026-01-02 00:24:38.739964 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.68s 2026-01-02 00:24:38.739975 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2026-01-02 00:24:38.739986 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2026-01-02 00:24:38.739997 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2026-01-02 00:24:39.046245 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-02 00:24:39.046363 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-01-02 00:24:39.052094 | orchestrator | + set -e 2026-01-02 00:24:39.052161 | orchestrator | + 
NAMESPACE=kolla 2026-01-02 00:24:39.052270 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-01-02 00:24:39.057569 | orchestrator | ++ semver latest 9.0.0 2026-01-02 00:24:39.124339 | orchestrator | + [[ -1 -lt 0 ]] 2026-01-02 00:24:39.124410 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-02 00:24:39.125454 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-01-02 00:24:51.291726 | orchestrator | 2026-01-02 00:24:51 | INFO  | Task 272940dc-c576-482b-ad98-202698c0bd55 (operator) was prepared for execution. 2026-01-02 00:24:51.291845 | orchestrator | 2026-01-02 00:24:51 | INFO  | It takes a moment until task 272940dc-c576-482b-ad98-202698c0bd55 (operator) has been started and output is visible here. 2026-01-02 00:25:07.469778 | orchestrator | 2026-01-02 00:25:07.469869 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-01-02 00:25:07.469879 | orchestrator | 2026-01-02 00:25:07.469885 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-02 00:25:07.469892 | orchestrator | Friday 02 January 2026 00:24:55 +0000 (0:00:00.162) 0:00:00.163 ******** 2026-01-02 00:25:07.469897 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:25:07.469904 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:25:07.469910 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:25:07.469915 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:25:07.469955 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:25:07.469961 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:25:07.469966 | orchestrator | 2026-01-02 00:25:07.469974 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-01-02 00:25:07.469980 | orchestrator | Friday 02 January 2026 00:24:59 +0000 (0:00:03.349) 0:00:03.512 ******** 2026-01-02 00:25:07.469986 | orchestrator | 
ok: [testbed-node-4] 2026-01-02 00:25:07.469991 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:25:07.469996 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:25:07.470002 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:25:07.470007 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:25:07.470067 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:25:07.470074 | orchestrator | 2026-01-02 00:25:07.470080 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-01-02 00:25:07.470085 | orchestrator | 2026-01-02 00:25:07.470090 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-02 00:25:07.470095 | orchestrator | Friday 02 January 2026 00:24:59 +0000 (0:00:00.775) 0:00:04.287 ******** 2026-01-02 00:25:07.470101 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:25:07.470106 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:25:07.470111 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:25:07.470116 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:25:07.470121 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:25:07.470126 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:25:07.470131 | orchestrator | 2026-01-02 00:25:07.470136 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-02 00:25:07.470141 | orchestrator | Friday 02 January 2026 00:25:00 +0000 (0:00:00.161) 0:00:04.449 ******** 2026-01-02 00:25:07.470146 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:25:07.470151 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:25:07.470157 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:25:07.470162 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:25:07.470167 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:25:07.470172 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:25:07.470177 | orchestrator | 2026-01-02 00:25:07.470182 | orchestrator | TASK [osism.commons.operator : Create 
operator group] ************************** 2026-01-02 00:25:07.470187 | orchestrator | Friday 02 January 2026 00:25:00 +0000 (0:00:00.177) 0:00:04.627 ******** 2026-01-02 00:25:07.470192 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:25:07.470198 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:25:07.470203 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:25:07.470208 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:25:07.470213 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:25:07.470218 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:25:07.470223 | orchestrator | 2026-01-02 00:25:07.470229 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-02 00:25:07.470234 | orchestrator | Friday 02 January 2026 00:25:00 +0000 (0:00:00.606) 0:00:05.233 ******** 2026-01-02 00:25:07.470239 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:25:07.470244 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:25:07.470249 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:25:07.470254 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:25:07.470259 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:25:07.470264 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:25:07.470269 | orchestrator | 2026-01-02 00:25:07.470274 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-02 00:25:07.470279 | orchestrator | Friday 02 January 2026 00:25:01 +0000 (0:00:00.785) 0:00:06.018 ******** 2026-01-02 00:25:07.470285 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-01-02 00:25:07.470290 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-01-02 00:25:07.470295 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-01-02 00:25:07.470300 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-01-02 00:25:07.470305 | orchestrator | changed: [testbed-node-4] => (item=adm) 
2026-01-02 00:25:07.470310 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-01-02 00:25:07.470315 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-01-02 00:25:07.470320 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-01-02 00:25:07.470339 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-01-02 00:25:07.470344 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-01-02 00:25:07.470351 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-01-02 00:25:07.470357 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-01-02 00:25:07.470363 | orchestrator | 2026-01-02 00:25:07.470369 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-01-02 00:25:07.470380 | orchestrator | Friday 02 January 2026 00:25:02 +0000 (0:00:01.205) 0:00:07.224 ******** 2026-01-02 00:25:07.470386 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:25:07.470392 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:25:07.470398 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:25:07.470403 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:25:07.470409 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:25:07.470415 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:25:07.470421 | orchestrator | 2026-01-02 00:25:07.470427 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-02 00:25:07.470434 | orchestrator | Friday 02 January 2026 00:25:04 +0000 (0:00:01.166) 0:00:08.390 ******** 2026-01-02 00:25:07.470440 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-01-02 00:25:07.470446 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-01-02 00:25:07.470452 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-01-02 00:25:07.470458 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-01-02 00:25:07.470476 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-01-02 00:25:07.470483 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-01-02 00:25:07.470490 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-01-02 00:25:07.470495 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-01-02 00:25:07.470500 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-01-02 00:25:07.470505 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-01-02 00:25:07.470510 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-01-02 00:25:07.470515 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-01-02 00:25:07.470520 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-01-02 00:25:07.470525 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-01-02 00:25:07.470530 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-01-02 00:25:07.470535 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-01-02 00:25:07.470540 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-01-02 00:25:07.470545 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-01-02 00:25:07.470554 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-01-02 00:25:07.470559 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-01-02 00:25:07.470564 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-01-02 00:25:07.470569 | 
orchestrator | 2026-01-02 00:25:07.470574 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-02 00:25:07.470580 | orchestrator | Friday 02 January 2026 00:25:05 +0000 (0:00:01.221) 0:00:09.612 ******** 2026-01-02 00:25:07.470585 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:25:07.470590 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:25:07.470596 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:25:07.470601 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:25:07.470606 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:25:07.470611 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:25:07.470616 | orchestrator | 2026-01-02 00:25:07.470621 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-02 00:25:07.470626 | orchestrator | Friday 02 January 2026 00:25:05 +0000 (0:00:00.174) 0:00:09.786 ******** 2026-01-02 00:25:07.470631 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:25:07.470636 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:25:07.470641 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:25:07.470646 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:25:07.470655 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:25:07.470660 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:25:07.470665 | orchestrator | 2026-01-02 00:25:07.470671 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-02 00:25:07.470676 | orchestrator | Friday 02 January 2026 00:25:05 +0000 (0:00:00.211) 0:00:09.997 ******** 2026-01-02 00:25:07.470681 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:25:07.470686 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:25:07.470691 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:25:07.470696 | orchestrator | changed: [testbed-node-4] 2026-01-02 
00:25:07.470701 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:25:07.470706 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:25:07.470711 | orchestrator | 2026-01-02 00:25:07.470716 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-02 00:25:07.470721 | orchestrator | Friday 02 January 2026 00:25:06 +0000 (0:00:00.583) 0:00:10.581 ******** 2026-01-02 00:25:07.470726 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:25:07.470731 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:25:07.470737 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:25:07.470742 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:25:07.470747 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:25:07.470752 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:25:07.470757 | orchestrator | 2026-01-02 00:25:07.470762 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-02 00:25:07.470767 | orchestrator | Friday 02 January 2026 00:25:06 +0000 (0:00:00.200) 0:00:10.781 ******** 2026-01-02 00:25:07.470772 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-02 00:25:07.470777 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:25:07.470782 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-02 00:25:07.470788 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:25:07.470793 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-02 00:25:07.470798 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-02 00:25:07.470803 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:25:07.470808 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:25:07.470813 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-01-02 00:25:07.470818 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:25:07.470823 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-02 
00:25:07.470828 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:25:07.470833 | orchestrator | 2026-01-02 00:25:07.470838 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-02 00:25:07.470843 | orchestrator | Friday 02 January 2026 00:25:07 +0000 (0:00:00.690) 0:00:11.472 ******** 2026-01-02 00:25:07.470848 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:25:07.470853 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:25:07.470858 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:25:07.470863 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:25:07.470868 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:25:07.470873 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:25:07.470878 | orchestrator | 2026-01-02 00:25:07.470884 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-02 00:25:07.470889 | orchestrator | Friday 02 January 2026 00:25:07 +0000 (0:00:00.164) 0:00:11.636 ******** 2026-01-02 00:25:07.470894 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:25:07.470899 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:25:07.470904 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:25:07.470909 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:25:07.470918 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:25:08.773041 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:25:08.773156 | orchestrator | 2026-01-02 00:25:08.773172 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-02 00:25:08.773186 | orchestrator | Friday 02 January 2026 00:25:07 +0000 (0:00:00.163) 0:00:11.800 ******** 2026-01-02 00:25:08.773224 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:25:08.773236 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:25:08.773247 | orchestrator | skipping: [testbed-node-2] 2026-01-02 
00:25:08.773258 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:25:08.773268 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:25:08.773279 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:25:08.773290 | orchestrator | 2026-01-02 00:25:08.773301 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-02 00:25:08.773312 | orchestrator | Friday 02 January 2026 00:25:07 +0000 (0:00:00.150) 0:00:11.950 ******** 2026-01-02 00:25:08.773323 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:25:08.773334 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:25:08.773345 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:25:08.773356 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:25:08.773366 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:25:08.773377 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:25:08.773388 | orchestrator | 2026-01-02 00:25:08.773398 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-02 00:25:08.773409 | orchestrator | Friday 02 January 2026 00:25:08 +0000 (0:00:00.633) 0:00:12.584 ******** 2026-01-02 00:25:08.773420 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:25:08.773431 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:25:08.773442 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:25:08.773452 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:25:08.773463 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:25:08.773473 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:25:08.773484 | orchestrator | 2026-01-02 00:25:08.773495 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:25:08.773507 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-02 00:25:08.773519 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-02 00:25:08.773530 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-02 00:25:08.773541 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-02 00:25:08.773554 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-02 00:25:08.773567 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-02 00:25:08.773580 | orchestrator | 2026-01-02 00:25:08.773593 | orchestrator | 2026-01-02 00:25:08.773606 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:25:08.773619 | orchestrator | Friday 02 January 2026 00:25:08 +0000 (0:00:00.261) 0:00:12.846 ******** 2026-01-02 00:25:08.773633 | orchestrator | =============================================================================== 2026-01-02 00:25:08.773646 | orchestrator | Gathering Facts --------------------------------------------------------- 3.35s 2026-01-02 00:25:08.773659 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.22s 2026-01-02 00:25:08.773673 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.21s 2026-01-02 00:25:08.773684 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.17s 2026-01-02 00:25:08.773694 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.79s 2026-01-02 00:25:08.773705 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s 2026-01-02 00:25:08.773716 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s 2026-01-02 00:25:08.773751 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.63s 2026-01-02 00:25:08.773763 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s 2026-01-02 00:25:08.773774 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.58s 2026-01-02 00:25:08.773785 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.26s 2026-01-02 00:25:08.773796 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.21s 2026-01-02 00:25:08.773807 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s 2026-01-02 00:25:08.773818 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s 2026-01-02 00:25:08.773829 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s 2026-01-02 00:25:08.773840 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s 2026-01-02 00:25:08.773850 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2026-01-02 00:25:08.773861 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s 2026-01-02 00:25:08.773872 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2026-01-02 00:25:09.118104 | orchestrator | + osism apply --environment custom facts 2026-01-02 00:25:11.129714 | orchestrator | 2026-01-02 00:25:11 | INFO  | Trying to run play facts in environment custom 2026-01-02 00:25:21.239231 | orchestrator | 2026-01-02 00:25:21 | INFO  | Task f599ccd3-4e95-498f-b611-6b64b91e0a89 (facts) was prepared for execution. 2026-01-02 00:25:21.239342 | orchestrator | 2026-01-02 00:25:21 | INFO  | It takes a moment until task f599ccd3-4e95-498f-b611-6b64b91e0a89 (facts) has been started and output is visible here. 
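One detail worth noting in the operator play above is the `[WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700` message. The fix the warning itself suggests — pre-creating the remote_tmp directory with the right permissions — can be sketched as follows (using a scratch directory rather than the real `/root`, purely for illustration):

```shell
# Sketch of the remote_tmp pre-creation the Ansible warning recommends.
# ROOT stands in for the remote user's home directory (e.g. /root).
ROOT="$(mktemp -d)"

# install -d creates the directory tree with the given mode in one step,
# matching the 0700 mode Ansible would otherwise apply on the fly.
install -d -m 0700 "$ROOT/.ansible/tmp"

# Confirm the permissions are what Ansible expects.
stat -c '%a' "$ROOT/.ansible/tmp"   # prints 700
```

Doing this in a bootstrap step (or baking it into the image) silences the warning and avoids mode surprises when tasks later run as a different user.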
2026-01-02 00:26:04.259119 | orchestrator | 2026-01-02 00:26:04.260196 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-01-02 00:26:04.260278 | orchestrator | 2026-01-02 00:26:04.260295 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-02 00:26:04.260308 | orchestrator | Friday 02 January 2026 00:25:25 +0000 (0:00:00.087) 0:00:00.087 ******** 2026-01-02 00:26:04.260319 | orchestrator | ok: [testbed-manager] 2026-01-02 00:26:04.260332 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:26:04.260344 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:26:04.260356 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:26:04.260367 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:26:04.260396 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:26:04.260408 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:26:04.260419 | orchestrator | 2026-01-02 00:26:04.260431 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-01-02 00:26:04.260442 | orchestrator | Friday 02 January 2026 00:25:26 +0000 (0:00:01.365) 0:00:01.452 ******** 2026-01-02 00:26:04.260453 | orchestrator | ok: [testbed-manager] 2026-01-02 00:26:04.260464 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:26:04.260476 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:26:04.260486 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:26:04.260497 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:26:04.260508 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:26:04.260520 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:26:04.260531 | orchestrator | 2026-01-02 00:26:04.260542 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-01-02 00:26:04.260553 | orchestrator | 2026-01-02 00:26:04.260564 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-01-02 00:26:04.260575 | orchestrator | Friday 02 January 2026 00:25:27 +0000 (0:00:01.165) 0:00:02.618 ******** 2026-01-02 00:26:04.260587 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:04.260597 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:04.260609 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:04.260644 | orchestrator | 2026-01-02 00:26:04.260656 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-02 00:26:04.260668 | orchestrator | Friday 02 January 2026 00:25:28 +0000 (0:00:00.101) 0:00:02.720 ******** 2026-01-02 00:26:04.260679 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:04.260690 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:04.260700 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:04.260711 | orchestrator | 2026-01-02 00:26:04.260722 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-02 00:26:04.260733 | orchestrator | Friday 02 January 2026 00:25:28 +0000 (0:00:00.214) 0:00:02.934 ******** 2026-01-02 00:26:04.260743 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:04.260754 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:04.260764 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:04.260775 | orchestrator | 2026-01-02 00:26:04.260786 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-02 00:26:04.260797 | orchestrator | Friday 02 January 2026 00:25:28 +0000 (0:00:00.255) 0:00:03.190 ******** 2026-01-02 00:26:04.260809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:26:04.260822 | orchestrator | 2026-01-02 00:26:04.260832 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-01-02 00:26:04.260843 | orchestrator | Friday 02 January 2026 00:25:28 +0000 (0:00:00.150) 0:00:03.341 ******** 2026-01-02 00:26:04.260854 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:04.260865 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:04.260875 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:04.260886 | orchestrator | 2026-01-02 00:26:04.260897 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-02 00:26:04.260908 | orchestrator | Friday 02 January 2026 00:25:29 +0000 (0:00:00.427) 0:00:03.768 ******** 2026-01-02 00:26:04.260945 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:26:04.260957 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:26:04.260968 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:26:04.260978 | orchestrator | 2026-01-02 00:26:04.260989 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-02 00:26:04.261000 | orchestrator | Friday 02 January 2026 00:25:29 +0000 (0:00:00.130) 0:00:03.899 ******** 2026-01-02 00:26:04.261011 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:26:04.261022 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:26:04.261032 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:26:04.261043 | orchestrator | 2026-01-02 00:26:04.261054 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-02 00:26:04.261065 | orchestrator | Friday 02 January 2026 00:25:30 +0000 (0:00:01.049) 0:00:04.948 ******** 2026-01-02 00:26:04.261075 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:04.261086 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:04.261097 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:04.261107 | orchestrator | 2026-01-02 00:26:04.261118 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-02 
00:26:04.261129 | orchestrator | Friday 02 January 2026 00:25:30 +0000 (0:00:00.459) 0:00:05.407 ******** 2026-01-02 00:26:04.261140 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:26:04.261151 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:26:04.261162 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:26:04.261172 | orchestrator | 2026-01-02 00:26:04.261183 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-02 00:26:04.261194 | orchestrator | Friday 02 January 2026 00:25:31 +0000 (0:00:01.059) 0:00:06.467 ******** 2026-01-02 00:26:04.261205 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:26:04.261216 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:26:04.261226 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:26:04.261237 | orchestrator | 2026-01-02 00:26:04.261248 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-01-02 00:26:04.261266 | orchestrator | Friday 02 January 2026 00:25:47 +0000 (0:00:15.991) 0:00:22.459 ******** 2026-01-02 00:26:04.261277 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:26:04.261288 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:26:04.261299 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:26:04.261310 | orchestrator | 2026-01-02 00:26:04.261321 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-01-02 00:26:04.261414 | orchestrator | Friday 02 January 2026 00:25:47 +0000 (0:00:00.104) 0:00:22.563 ******** 2026-01-02 00:26:04.261430 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:26:04.261441 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:26:04.261452 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:26:04.261463 | orchestrator | 2026-01-02 00:26:04.261473 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-02 
00:26:04.261484 | orchestrator | Friday 02 January 2026 00:25:55 +0000 (0:00:07.808) 0:00:30.371 ******** 2026-01-02 00:26:04.261495 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:04.261506 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:04.261548 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:04.261562 | orchestrator | 2026-01-02 00:26:04.261573 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-01-02 00:26:04.261584 | orchestrator | Friday 02 January 2026 00:25:56 +0000 (0:00:00.426) 0:00:30.798 ******** 2026-01-02 00:26:04.261595 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-01-02 00:26:04.261606 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-01-02 00:26:04.261617 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-01-02 00:26:04.261628 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-01-02 00:26:04.261639 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-01-02 00:26:04.261650 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-01-02 00:26:04.261660 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-01-02 00:26:04.261671 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-01-02 00:26:04.261681 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-01-02 00:26:04.261692 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-01-02 00:26:04.261703 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-01-02 00:26:04.261714 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-01-02 00:26:04.261725 | orchestrator | 2026-01-02 00:26:04.261735 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2026-01-02 00:26:04.261746 | orchestrator | Friday 02 January 2026 00:25:59 +0000 (0:00:03.165) 0:00:33.963 ******** 2026-01-02 00:26:04.261756 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:04.261767 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:04.261778 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:04.261788 | orchestrator | 2026-01-02 00:26:04.261799 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-02 00:26:04.261810 | orchestrator | 2026-01-02 00:26:04.261821 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-02 00:26:04.261832 | orchestrator | Friday 02 January 2026 00:26:00 +0000 (0:00:01.323) 0:00:35.286 ******** 2026-01-02 00:26:04.261842 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:26:04.261853 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:26:04.261864 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:26:04.261874 | orchestrator | ok: [testbed-manager] 2026-01-02 00:26:04.261885 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:04.261896 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:04.261906 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:04.261936 | orchestrator | 2026-01-02 00:26:04.261948 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:26:04.261969 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:26:04.261981 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:26:04.261993 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:26:04.262004 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:26:04.262085 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:26:04.262112 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:26:04.262131 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:26:04.262149 | orchestrator | 2026-01-02 00:26:04.262167 | orchestrator | 2026-01-02 00:26:04.262187 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:26:04.262199 | orchestrator | Friday 02 January 2026 00:26:04 +0000 (0:00:03.650) 0:00:38.937 ******** 2026-01-02 00:26:04.262210 | orchestrator | =============================================================================== 2026-01-02 00:26:04.262221 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.99s 2026-01-02 00:26:04.262232 | orchestrator | Install required packages (Debian) -------------------------------------- 7.81s 2026-01-02 00:26:04.262242 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.65s 2026-01-02 00:26:04.262253 | orchestrator | Copy fact files --------------------------------------------------------- 3.17s 2026-01-02 00:26:04.262306 | orchestrator | Create custom facts directory ------------------------------------------- 1.37s 2026-01-02 00:26:04.262318 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.32s 2026-01-02 00:26:04.262341 | orchestrator | Copy fact file ---------------------------------------------------------- 1.17s 2026-01-02 00:26:04.520308 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s 2026-01-02 00:26:04.520441 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s 2026-01-02 00:26:04.520469 | orchestrator | osism.commons.repository : Remove sources.list 
file --------------------- 0.46s 2026-01-02 00:26:04.520491 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s 2026-01-02 00:26:04.520533 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s 2026-01-02 00:26:04.520554 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.26s 2026-01-02 00:26:04.520574 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s 2026-01-02 00:26:04.520594 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s 2026-01-02 00:26:04.520614 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s 2026-01-02 00:26:04.520634 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2026-01-02 00:26:04.520652 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s 2026-01-02 00:26:04.859179 | orchestrator | + osism apply bootstrap 2026-01-02 00:26:17.026567 | orchestrator | 2026-01-02 00:26:17 | INFO  | Task a30a278e-fb73-4116-a577-b83b2a9e9b1f (bootstrap) was prepared for execution. 2026-01-02 00:26:17.026685 | orchestrator | 2026-01-02 00:26:17 | INFO  | It takes a moment until task a30a278e-fb73-4116-a577-b83b2a9e9b1f (bootstrap) has been started and output is visible here. 
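The facts play above copies files such as `testbed_ceph_devices` into each node's custom facts directory. The mechanism it relies on can be sketched as follows — the path and fact name here are illustrative stand-ins, not taken from the testbed repository; on the real nodes the files land in `/etc/ansible/facts.d` and are exposed by the setup module as `ansible_local.<name>`:

```shell
# Minimal sketch of Ansible's local (custom) facts mechanism.
# FACTS_D stands in for /etc/ansible/facts.d on a managed node.
FACTS_D="$(mktemp -d)/facts.d"
mkdir -p "$FACTS_D"

# A static fact file must be valid JSON (or INI) and use the .fact suffix;
# gathering facts then surfaces it as ansible_local.example_devices.
cat > "$FACTS_D/example_devices.fact" <<'EOF'
{"devices": ["/dev/sdb", "/dev/sdc"]}
EOF

# Verify the file parses as JSON, as Ansible requires for static facts.
python3 -c 'import json,sys; json.load(open(sys.argv[1]))' \
  "$FACTS_D/example_devices.fact" && echo "fact file OK"
```

This is why the subsequent "Gathers facts about hosts" task matters: re-running fact gathering is what makes the freshly copied fact files visible to later plays.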
2026-01-02 00:26:33.468111 | orchestrator | 2026-01-02 00:26:33.468254 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-01-02 00:26:33.468272 | orchestrator | 2026-01-02 00:26:33.468284 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-01-02 00:26:33.468296 | orchestrator | Friday 02 January 2026 00:26:21 +0000 (0:00:00.165) 0:00:00.165 ******** 2026-01-02 00:26:33.468308 | orchestrator | ok: [testbed-manager] 2026-01-02 00:26:33.468320 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:33.468331 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:33.468342 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:33.468353 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:26:33.468364 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:26:33.468374 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:26:33.468385 | orchestrator | 2026-01-02 00:26:33.468396 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-02 00:26:33.468407 | orchestrator | 2026-01-02 00:26:33.468418 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-02 00:26:33.468429 | orchestrator | Friday 02 January 2026 00:26:21 +0000 (0:00:00.271) 0:00:00.436 ******** 2026-01-02 00:26:33.468440 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:26:33.468451 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:26:33.468461 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:26:33.468472 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:33.468483 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:33.468495 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:33.468505 | orchestrator | ok: [testbed-manager] 2026-01-02 00:26:33.468516 | orchestrator | 2026-01-02 00:26:33.468527 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-01-02 00:26:33.468538 | orchestrator |
2026-01-02 00:26:33.468549 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-02 00:26:33.468560 | orchestrator | Friday 02 January 2026 00:26:25 +0000 (0:00:03.875) 0:00:04.312 ********
2026-01-02 00:26:33.468572 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-01-02 00:26:33.468583 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-01-02 00:26:33.468594 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-01-02 00:26:33.468604 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-01-02 00:26:33.468615 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-01-02 00:26:33.468626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:26:33.468639 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-02 00:26:33.468651 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-02 00:26:33.468665 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-01-02 00:26:33.468678 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-01-02 00:26:33.468692 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-02 00:26:33.468704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-02 00:26:33.468717 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-01-02 00:26:33.468730 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-01-02 00:26:33.468744 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-02 00:26:33.468756 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-02 00:26:33.468769 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:26:33.468780 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-02 00:26:33.468791 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-02 00:26:33.468801 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:26:33.468812 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-02 00:26:33.468823 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-01-02 00:26:33.468834 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-01-02 00:26:33.468852 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-02 00:26:33.468863 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-02 00:26:33.468873 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-02 00:26:33.468884 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-02 00:26:33.468894 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-02 00:26:33.468905 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-02 00:26:33.468937 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-02 00:26:33.468949 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-02 00:26:33.468959 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-02 00:26:33.468985 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-01-02 00:26:33.468996 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-02 00:26:33.469007 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-02 00:26:33.469019 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-02 00:26:33.469030 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-02 00:26:33.469041 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-02 00:26:33.469052 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:26:33.469063 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-02 00:26:33.469075 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-02 00:26:33.469086 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-02 00:26:33.469097 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-02 00:26:33.469108 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-02 00:26:33.469120 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:26:33.469131 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-02 00:26:33.469163 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-02 00:26:33.469174 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:26:33.469185 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-02 00:26:33.469196 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-02 00:26:33.469206 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-02 00:26:33.469217 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-02 00:26:33.469228 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:26:33.469238 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-02 00:26:33.469249 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-02 00:26:33.469259 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:26:33.469270 | orchestrator |
2026-01-02 00:26:33.469281 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-01-02 00:26:33.469292 | orchestrator |
2026-01-02 00:26:33.469303 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-01-02 00:26:33.469314 | orchestrator | Friday 02 January 2026 00:26:26 +0000 (0:00:00.486) 0:00:04.798 ********
2026-01-02 00:26:33.469324 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:26:33.469335 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:26:33.469345 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:26:33.469356 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:26:33.469366 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:26:33.469377 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:26:33.469387 | orchestrator | ok: [testbed-manager]
2026-01-02 00:26:33.469398 | orchestrator |
2026-01-02 00:26:33.469409 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-01-02 00:26:33.469420 | orchestrator | Friday 02 January 2026 00:26:27 +0000 (0:00:01.222) 0:00:06.021 ********
2026-01-02 00:26:33.469430 | orchestrator | ok: [testbed-manager]
2026-01-02 00:26:33.469441 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:26:33.469458 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:26:33.469469 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:26:33.469479 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:26:33.469490 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:26:33.469500 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:26:33.469512 | orchestrator |
2026-01-02 00:26:33.469523 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-01-02 00:26:33.469535 | orchestrator | Friday 02 January 2026 00:26:28 +0000 (0:00:00.289) 0:00:07.369 ********
2026-01-02 00:26:33.469547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:26:33.469560 | orchestrator |
2026-01-02 00:26:33.469571 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-01-02 00:26:33.469582 | orchestrator | Friday 02 January 2026 00:26:28 +0000 (0:00:00.289) 0:00:07.659 ********
2026-01-02 00:26:33.469593 | orchestrator | changed: [testbed-manager]
2026-01-02 00:26:33.469604 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:26:33.469615 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:26:33.469625 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:26:33.469636 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:26:33.469647 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:26:33.469657 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:26:33.469668 | orchestrator |
2026-01-02 00:26:33.469678 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-01-02 00:26:33.469689 | orchestrator | Friday 02 January 2026 00:26:30 +0000 (0:00:02.078) 0:00:09.737 ********
2026-01-02 00:26:33.469699 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:26:33.469712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:26:33.469725 | orchestrator |
2026-01-02 00:26:33.469736 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-01-02 00:26:33.469746 | orchestrator | Friday 02 January 2026 00:26:31 +0000 (0:00:00.281) 0:00:10.019 ********
2026-01-02 00:26:33.469757 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:26:33.469768 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:26:33.469778 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:26:33.469789 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:26:33.469799 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:26:33.469810 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:26:33.469820 | orchestrator |
2026-01-02 00:26:33.469831 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-01-02 00:26:33.469841 | orchestrator | Friday 02 January 2026 00:26:32 +0000 (0:00:01.037) 0:00:11.056 ********
2026-01-02 00:26:33.469852 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:26:33.469863 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:26:33.469873 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:26:33.469884 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:26:33.469894 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:26:33.469905 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:26:33.469943 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:26:33.469955 | orchestrator |
2026-01-02 00:26:33.469966 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-01-02 00:26:33.469977 | orchestrator | Friday 02 January 2026 00:26:32 +0000 (0:00:00.569) 0:00:11.626 ********
2026-01-02 00:26:33.469987 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:26:33.469998 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:26:33.470009 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:26:33.470083 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:26:33.470095 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:26:33.470114 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:26:33.470124 | orchestrator | ok: [testbed-manager]
2026-01-02 00:26:33.470135 | orchestrator |
2026-01-02 00:26:33.470146 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-01-02 00:26:33.470159 | orchestrator | Friday 02 January 2026 00:26:33 +0000 (0:00:00.453) 0:00:12.079 ********
2026-01-02 00:26:33.470169 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:26:33.470180 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:26:33.470200 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:26:45.978412 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:26:45.978509 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:26:45.978524 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:26:45.978531 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:26:45.978539 | orchestrator |
2026-01-02 00:26:45.978547 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-01-02 00:26:45.978555 | orchestrator | Friday 02 January 2026 00:26:33 +0000 (0:00:00.237) 0:00:12.317 ********
2026-01-02 00:26:45.978564 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:26:45.978585 | orchestrator |
2026-01-02 00:26:45.978592 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-01-02 00:26:45.978600 | orchestrator | Friday 02 January 2026 00:26:33 +0000 (0:00:00.317) 0:00:12.634 ********
2026-01-02 00:26:45.978617 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:26:45.978624 | orchestrator |
2026-01-02 00:26:45.978631 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-01-02 00:26:45.978638 | orchestrator | Friday 02 January 2026 00:26:34 +0000 (0:00:00.308) 0:00:12.942 ********
2026-01-02 00:26:45.978645 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:26:45.978653 | orchestrator | ok: [testbed-manager]
2026-01-02 00:26:45.978659 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:26:45.978666 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:26:45.978672 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:26:45.978679 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:26:45.978685 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:26:45.978692 | orchestrator |
2026-01-02 00:26:45.978699 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-01-02 00:26:45.978705 | orchestrator | Friday 02 January 2026 00:26:35 +0000 (0:00:01.352) 0:00:14.295 ********
2026-01-02 00:26:45.978712 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:26:45.978719 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:26:45.978725 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:26:45.978733 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:26:45.978739 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:26:45.978746 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:26:45.978753 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:26:45.978759 | orchestrator |
2026-01-02 00:26:45.978766 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-01-02 00:26:45.978773 | orchestrator | Friday 02 January 2026 00:26:35 +0000 (0:00:00.233) 0:00:14.528 ********
2026-01-02 00:26:45.978779 | orchestrator | ok: [testbed-manager]
2026-01-02 00:26:45.978786 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:26:45.978793 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:26:45.978799 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:26:45.978806 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:26:45.978812 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:26:45.978818 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:26:45.978825 | orchestrator |
2026-01-02 00:26:45.978832 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-01-02 00:26:45.978857 | orchestrator | Friday 02 January 2026 00:26:36 +0000 (0:00:00.566) 0:00:15.094 ********
2026-01-02 00:26:45.978864 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:26:45.978871 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:26:45.978878 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:26:45.978884 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:26:45.978890 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:26:45.978897 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:26:45.978903 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:26:45.978910 | orchestrator |
2026-01-02 00:26:45.978944 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-01-02 00:26:45.978957 | orchestrator | Friday 02 January 2026 00:26:36 +0000 (0:00:00.319) 0:00:15.414 ********
2026-01-02 00:26:45.978966 | orchestrator | ok: [testbed-manager]
2026-01-02 00:26:45.978973 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:26:45.978981 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:26:45.978989 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:26:45.978997 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:26:45.979005 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:26:45.979013 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:26:45.979021 | orchestrator |
2026-01-02 00:26:45.979029 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-01-02 00:26:45.979040 | orchestrator | Friday 02 January 2026 00:26:37 +0000 (0:00:00.604) 0:00:16.018 ********
2026-01-02 00:26:45.979048 | orchestrator | ok: [testbed-manager]
2026-01-02 00:26:45.979056 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:26:45.979064 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:26:45.979071 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:26:45.979078 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:26:45.979084 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:26:45.979091 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:26:45.979097 | orchestrator |
2026-01-02 00:26:45.979104 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-01-02 00:26:45.979111 | orchestrator | Friday 02 January 2026 00:26:38 +0000 (0:00:01.136) 0:00:17.155 ********
2026-01-02 00:26:45.979117 | orchestrator | ok: [testbed-manager]
2026-01-02 00:26:45.979124 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:26:45.979130 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:26:45.979137 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:26:45.979143 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:26:45.979150 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:26:45.979156 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:26:45.979163 | orchestrator |
2026-01-02 00:26:45.979169 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-01-02 00:26:45.979176 | orchestrator | Friday 02 January 2026 00:26:39 +0000 (0:00:01.247) 0:00:18.402 ********
2026-01-02 00:26:45.979198 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:26:45.979205 | orchestrator |
2026-01-02 00:26:45.979212 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-01-02 00:26:45.979219 | orchestrator | Friday 02 January 2026 00:26:39 +0000 (0:00:00.324) 0:00:18.728 ********
2026-01-02 00:26:45.979225 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:26:45.979232 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:26:45.979239 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:26:45.979245 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:26:45.979252 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:26:45.979258 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:26:45.979265 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:26:45.979271 | orchestrator |
2026-01-02 00:26:45.979278 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-02 00:26:45.979290 | orchestrator | Friday 02 January 2026 00:26:41 +0000 (0:00:01.287) 0:00:20.015 ********
2026-01-02 00:26:45.979297 | orchestrator | ok: [testbed-manager]
2026-01-02 00:26:45.979304 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:26:45.979310 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:26:45.979317 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:26:45.979324 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:26:45.979330 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:26:45.979337 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:26:45.979343 | orchestrator |
2026-01-02 00:26:45.979350 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-02 00:26:45.979357 | orchestrator | Friday 02 January 2026 00:26:41 +0000 (0:00:00.250) 0:00:20.266 ********
2026-01-02 00:26:45.979363 | orchestrator | ok: [testbed-manager]
2026-01-02 00:26:45.979370 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:26:45.979377 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:26:45.979383 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:26:45.979390 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:26:45.979396 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:26:45.979403 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:26:45.979409 | orchestrator |
2026-01-02 00:26:45.979416 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-02 00:26:45.979423 | orchestrator | Friday 02 January 2026 00:26:41 +0000 (0:00:00.226) 0:00:20.493 ********
2026-01-02 00:26:45.979429 | orchestrator | ok: [testbed-manager]
2026-01-02 00:26:45.979436 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:26:45.979443 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:26:45.979449 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:26:45.979455 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:26:45.979462 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:26:45.979468 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:26:45.979475 | orchestrator |
2026-01-02 00:26:45.979482 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-02 00:26:45.979488 | orchestrator | Friday 02 January 2026 00:26:41 +0000 (0:00:00.245) 0:00:20.738 ********
2026-01-02 00:26:45.979496 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:26:45.979504 | orchestrator |
2026-01-02 00:26:45.979510 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-02 00:26:45.979517 | orchestrator | Friday 02 January 2026 00:26:42 +0000 (0:00:00.283) 0:00:21.022 ********
2026-01-02 00:26:45.979524 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:26:45.979530 | orchestrator | ok: [testbed-manager]
2026-01-02 00:26:45.979537 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:26:45.979543 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:26:45.979550 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:26:45.979556 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:26:45.979563 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:26:45.979569 | orchestrator |
2026-01-02 00:26:45.979576 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-02 00:26:45.979583 | orchestrator | Friday 02 January 2026 00:26:42 +0000 (0:00:00.546) 0:00:21.569 ********
2026-01-02 00:26:45.979590 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:26:45.979596 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:26:45.979603 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:26:45.979609 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:26:45.979616 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:26:45.979622 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:26:45.979629 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:26:45.979636 | orchestrator |
2026-01-02 00:26:45.979642 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-02 00:26:45.979649 | orchestrator | Friday 02 January 2026 00:26:43 +0000 (0:00:00.283) 0:00:21.852 ********
2026-01-02 00:26:45.979656 | orchestrator | ok: [testbed-manager]
2026-01-02 00:26:45.979670 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:26:45.979677 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:26:45.979684 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:26:45.979690 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:26:45.979697 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:26:45.979703 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:26:45.979710 | orchestrator |
2026-01-02 00:26:45.979717 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-02 00:26:45.979723 | orchestrator | Friday 02 January 2026 00:26:44 +0000 (0:00:01.138) 0:00:22.990 ********
2026-01-02 00:26:45.979730 | orchestrator | ok: [testbed-manager]
2026-01-02 00:26:45.979737 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:26:45.979743 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:26:45.979750 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:26:45.979756 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:26:45.979763 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:26:45.979769 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:26:45.979776 | orchestrator |
2026-01-02 00:26:45.979783 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-02 00:26:45.979789 | orchestrator | Friday 02 January 2026 00:26:44 +0000 (0:00:00.564) 0:00:23.555 ********
2026-01-02 00:26:45.979796 | orchestrator | ok: [testbed-manager]
2026-01-02 00:26:45.979803 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:26:45.979809 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:26:45.979816 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:26:45.979827 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:27:26.462791 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:27:26.462908 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:27:26.462957 | orchestrator |
2026-01-02 00:27:26.462972 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-02 00:27:26.462985 | orchestrator | Friday 02 January 2026 00:26:45 +0000 (0:00:01.169) 0:00:24.725 ********
2026-01-02 00:27:26.462997 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:26.463008 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:26.463020 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:26.463030 | orchestrator | changed: [testbed-manager]
2026-01-02 00:27:26.463042 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:27:26.463053 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:27:26.463064 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:27:26.463075 | orchestrator |
2026-01-02 00:27:26.463087 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-01-02 00:27:26.463098 | orchestrator | Friday 02 January 2026 00:27:01 +0000 (0:00:15.666) 0:00:40.391 ********
2026-01-02 00:27:26.463109 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:26.463120 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:26.463131 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:26.463143 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:26.463154 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:26.463165 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:26.463175 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:26.463186 | orchestrator |
2026-01-02 00:27:26.463197 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-01-02 00:27:26.463208 | orchestrator | Friday 02 January 2026 00:27:01 +0000 (0:00:00.236) 0:00:40.627 ********
2026-01-02 00:27:26.463219 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:26.463230 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:26.463241 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:26.463252 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:26.463263 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:26.463273 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:26.463284 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:26.463295 | orchestrator |
2026-01-02 00:27:26.463306 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-01-02 00:27:26.463317 | orchestrator | Friday 02 January 2026 00:27:02 +0000 (0:00:00.257) 0:00:40.885 ********
2026-01-02 00:27:26.463352 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:26.463363 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:26.463374 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:26.463385 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:26.463396 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:26.463406 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:26.463417 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:26.463428 | orchestrator |
2026-01-02 00:27:26.463439 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-01-02 00:27:26.463450 | orchestrator | Friday 02 January 2026 00:27:02 +0000 (0:00:00.245) 0:00:41.131 ********
2026-01-02 00:27:26.463462 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:27:26.463476 | orchestrator |
2026-01-02 00:27:26.463487 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-01-02 00:27:26.463498 | orchestrator | Friday 02 January 2026 00:27:02 +0000 (0:00:00.315) 0:00:41.447 ********
2026-01-02 00:27:26.463509 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:26.463520 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:26.463531 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:26.463542 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:26.463552 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:26.463563 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:26.463573 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:26.463584 | orchestrator |
2026-01-02 00:27:26.463595 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-01-02 00:27:26.463606 | orchestrator | Friday 02 January 2026 00:27:04 +0000 (0:00:01.656) 0:00:43.103 ********
2026-01-02 00:27:26.463617 | orchestrator | changed: [testbed-manager]
2026-01-02 00:27:26.463628 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:27:26.463639 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:27:26.463650 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:27:26.463660 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:27:26.463671 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:27:26.463682 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:27:26.463693 | orchestrator |
2026-01-02 00:27:26.463704 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-01-02 00:27:26.463715 | orchestrator | Friday 02 January 2026 00:27:05 +0000 (0:00:01.099) 0:00:44.203 ********
2026-01-02 00:27:26.463726 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:26.463737 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:26.463748 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:26.463759 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:26.463771 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:26.463781 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:26.463792 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:26.463802 | orchestrator |
2026-01-02 00:27:26.463813 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-01-02 00:27:26.463824 | orchestrator | Friday 02 January 2026 00:27:06 +0000 (0:00:00.948) 0:00:45.151 ********
2026-01-02 00:27:26.463836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:27:26.463849 | orchestrator |
2026-01-02 00:27:26.463860 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-01-02 00:27:26.463871 | orchestrator | Friday 02 January 2026 00:27:06 +0000 (0:00:00.306) 0:00:45.457 ********
2026-01-02 00:27:26.463882 | orchestrator | changed: [testbed-manager]
2026-01-02 00:27:26.463893 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:27:26.463904 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:27:26.463992 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:27:26.464008 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:27:26.464028 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:27:26.464039 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:27:26.464051 | orchestrator |
2026-01-02 00:27:26.464080 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-01-02 00:27:26.464092 | orchestrator | Friday 02 January 2026 00:27:07 +0000 (0:00:00.957) 0:00:46.414 ********
2026-01-02 00:27:26.464103 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:27:26.464114 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:27:26.464125 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:27:26.464136 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:27:26.464147 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:27:26.464158 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:27:26.464169 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:27:26.464180 | orchestrator |
2026-01-02 00:27:26.464190 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-01-02 00:27:26.464214 | orchestrator | Friday 02 January 2026 00:27:07 +0000 (0:00:00.247) 0:00:46.662 ********
2026-01-02 00:27:26.464226 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:27:26.464238 | orchestrator |
2026-01-02 00:27:26.464254 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-01-02 00:27:26.464272 | orchestrator | Friday 02 January 2026 00:27:08 +0000 (0:00:00.344) 0:00:47.006 ********
2026-01-02 00:27:26.464283 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:26.464294 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:26.464305 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:26.464316 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:26.464327 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:26.464338 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:26.464348 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:26.464359 | orchestrator |
2026-01-02 00:27:26.464370 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-01-02 00:27:26.464381 | orchestrator | Friday 02 January 2026 00:27:09 +0000 (0:00:01.577) 0:00:48.584 ********
2026-01-02 00:27:26.464392 | orchestrator | changed: [testbed-manager]
2026-01-02 00:27:26.464403 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:27:26.464414 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:27:26.464425 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:27:26.464435 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:27:26.464446 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:27:26.464457 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:27:26.464467 | orchestrator |
2026-01-02 00:27:26.464478 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-01-02 00:27:26.464489 | orchestrator | Friday 02 January 2026 00:27:10 +0000 (0:00:01.137) 0:00:49.721 ********
2026-01-02 00:27:26.464500 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:27:26.464510 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:27:26.464521 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:27:26.464532 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:27:26.464543 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:27:26.464554 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:27:26.464564 | orchestrator | changed: [testbed-manager]
2026-01-02 00:27:26.464575 | orchestrator |
2026-01-02 00:27:26.464586 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-01-02 00:27:26.464597 | orchestrator | Friday 02 January 2026 00:27:23 +0000 (0:00:12.562) 0:01:02.284 ********
2026-01-02 00:27:26.464608 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:26.464619 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:26.464630 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:26.464640 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:26.464651 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:26.464662 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:26.464680 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:26.464691 | orchestrator |
2026-01-02 00:27:26.464702 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-01-02 00:27:26.464713 | orchestrator | Friday 02 January 2026 00:27:24 +0000 (0:00:01.137) 0:01:03.422 ********
2026-01-02 00:27:26.464723 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:26.464734 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:26.464745 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:26.464756 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:26.464766 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:26.464777 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:26.464788 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:26.464799 | orchestrator |
2026-01-02 00:27:26.464809 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-01-02 00:27:26.464820 | orchestrator | Friday 02 January 2026 00:27:25 +0000 (0:00:00.927) 0:01:04.350 ********
2026-01-02 00:27:26.464831 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:26.464842 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:26.464853 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:26.464864 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:26.464874 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:26.464885 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:26.464900 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:26.464911 | orchestrator |
2026-01-02 00:27:26.464958 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-01-02 00:27:26.464979 | orchestrator | Friday 02
January 2026 00:27:25 +0000 (0:00:00.256) 0:01:04.606 ******** 2026-01-02 00:27:26.464998 | orchestrator | ok: [testbed-manager] 2026-01-02 00:27:26.465018 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:27:26.465030 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:27:26.465041 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:27:26.465051 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:27:26.465062 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:27:26.465072 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:27:26.465083 | orchestrator | 2026-01-02 00:27:26.465094 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-01-02 00:27:26.465105 | orchestrator | Friday 02 January 2026 00:27:26 +0000 (0:00:00.275) 0:01:04.881 ******** 2026-01-02 00:27:26.465116 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:27:26.465128 | orchestrator | 2026-01-02 00:27:26.465148 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-01-02 00:29:44.858526 | orchestrator | Friday 02 January 2026 00:27:26 +0000 (0:00:00.330) 0:01:05.212 ******** 2026-01-02 00:29:44.858648 | orchestrator | ok: [testbed-manager] 2026-01-02 00:29:44.858665 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:29:44.858677 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:29:44.858688 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:29:44.858699 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:29:44.858711 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:29:44.858722 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:29:44.858733 | orchestrator | 2026-01-02 00:29:44.858745 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2026-01-02 00:29:44.858756 | orchestrator | Friday 02 January 2026 00:27:28 +0000 (0:00:01.638) 0:01:06.850 ******** 2026-01-02 00:29:44.858768 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:29:44.858780 | orchestrator | changed: [testbed-manager] 2026-01-02 00:29:44.858791 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:29:44.858802 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:29:44.858864 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:29:44.858876 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:29:44.858887 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:29:44.858898 | orchestrator | 2026-01-02 00:29:44.858935 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-01-02 00:29:44.858947 | orchestrator | Friday 02 January 2026 00:27:28 +0000 (0:00:00.578) 0:01:07.428 ******** 2026-01-02 00:29:44.858958 | orchestrator | ok: [testbed-manager] 2026-01-02 00:29:44.858969 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:29:44.858980 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:29:44.858990 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:29:44.859001 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:29:44.859011 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:29:44.859022 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:29:44.859033 | orchestrator | 2026-01-02 00:29:44.859044 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-01-02 00:29:44.859054 | orchestrator | Friday 02 January 2026 00:27:28 +0000 (0:00:00.297) 0:01:07.726 ******** 2026-01-02 00:29:44.859067 | orchestrator | ok: [testbed-manager] 2026-01-02 00:29:44.859080 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:29:44.859092 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:29:44.859104 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:29:44.859116 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:29:44.859128 | 
orchestrator | ok: [testbed-node-5] 2026-01-02 00:29:44.859141 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:29:44.859153 | orchestrator | 2026-01-02 00:29:44.859166 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-01-02 00:29:44.859179 | orchestrator | Friday 02 January 2026 00:27:30 +0000 (0:00:01.168) 0:01:08.894 ******** 2026-01-02 00:29:44.859192 | orchestrator | changed: [testbed-manager] 2026-01-02 00:29:44.859208 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:29:44.859227 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:29:44.859245 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:29:44.859265 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:29:44.859284 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:29:44.859303 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:29:44.859322 | orchestrator | 2026-01-02 00:29:44.859341 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-01-02 00:29:44.859360 | orchestrator | Friday 02 January 2026 00:27:31 +0000 (0:00:01.707) 0:01:10.602 ******** 2026-01-02 00:29:44.859379 | orchestrator | ok: [testbed-manager] 2026-01-02 00:29:44.859398 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:29:44.859410 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:29:44.859421 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:29:44.859432 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:29:44.859442 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:29:44.859453 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:29:44.859464 | orchestrator | 2026-01-02 00:29:44.859474 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-01-02 00:29:44.859485 | orchestrator | Friday 02 January 2026 00:27:34 +0000 (0:00:02.377) 0:01:12.979 ******** 2026-01-02 00:29:44.859496 | orchestrator | ok: [testbed-manager] 2026-01-02 00:29:44.859507 
| orchestrator | ok: [testbed-node-4] 2026-01-02 00:29:44.859517 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:29:44.859528 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:29:44.859538 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:29:44.859549 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:29:44.859559 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:29:44.859570 | orchestrator | 2026-01-02 00:29:44.859581 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-01-02 00:29:44.859591 | orchestrator | Friday 02 January 2026 00:28:12 +0000 (0:00:38.371) 0:01:51.351 ******** 2026-01-02 00:29:44.859602 | orchestrator | changed: [testbed-manager] 2026-01-02 00:29:44.859613 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:29:44.859623 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:29:44.859634 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:29:44.859644 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:29:44.859655 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:29:44.859665 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:29:44.859685 | orchestrator | 2026-01-02 00:29:44.859712 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-01-02 00:29:44.859724 | orchestrator | Friday 02 January 2026 00:29:26 +0000 (0:01:14.025) 0:03:05.376 ******** 2026-01-02 00:29:44.859734 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:29:44.859745 | orchestrator | ok: [testbed-manager] 2026-01-02 00:29:44.859756 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:29:44.859767 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:29:44.859777 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:29:44.859788 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:29:44.859798 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:29:44.859809 | orchestrator | 2026-01-02 00:29:44.859862 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2026-01-02 00:29:44.859874 | orchestrator | Friday 02 January 2026 00:29:28 +0000 (0:00:02.047) 0:03:07.423 ******** 2026-01-02 00:29:44.859885 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:29:44.859895 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:29:44.859906 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:29:44.859917 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:29:44.859927 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:29:44.859938 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:29:44.859949 | orchestrator | changed: [testbed-manager] 2026-01-02 00:29:44.859959 | orchestrator | 2026-01-02 00:29:44.859970 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-01-02 00:29:44.859981 | orchestrator | Friday 02 January 2026 00:29:42 +0000 (0:00:13.894) 0:03:21.318 ******** 2026-01-02 00:29:44.860023 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-01-02 00:29:44.860056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-01-02 00:29:44.860080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-01-02 00:29:44.860100 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-02 00:29:44.860112 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-02 00:29:44.860123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-01-02 00:29:44.860144 | orchestrator | 2026-01-02 00:29:44.860156 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-01-02 00:29:44.860166 | orchestrator | Friday 02 January 2026 00:29:43 +0000 (0:00:00.443) 0:03:21.762 ******** 2026-01-02 00:29:44.860181 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-02 00:29:44.860192 | orchestrator | 
skipping: [testbed-manager] 2026-01-02 00:29:44.860203 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-02 00:29:44.860214 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:29:44.860225 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-02 00:29:44.860236 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-02 00:29:44.860246 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:29:44.860258 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:29:44.860277 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-02 00:29:44.860296 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-02 00:29:44.860315 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-02 00:29:44.860335 | orchestrator | 2026-01-02 00:29:44.860354 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-01-02 00:29:44.860366 | orchestrator | Friday 02 January 2026 00:29:44 +0000 (0:00:01.763) 0:03:23.525 ******** 2026-01-02 00:29:44.860376 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-02 00:29:44.860389 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-02 00:29:44.860400 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-02 00:29:44.860415 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-02 00:29:44.860434 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-02 00:29:44.860465 | 
orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-02 00:29:52.052428 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-02 00:29:52.052544 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-02 00:29:52.052566 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-02 00:29:52.052586 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-02 00:29:52.052608 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-02 00:29:52.052627 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-02 00:29:52.052648 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-02 00:29:52.052668 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-02 00:29:52.052681 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-02 00:29:52.052692 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-02 00:29:52.052703 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-02 00:29:52.052721 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-02 00:29:52.052741 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-02 00:29:52.052789 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-02 00:29:52.052848 | orchestrator | 
skipping: [testbed-manager] 2026-01-02 00:29:52.052862 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-02 00:29:52.052891 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-02 00:29:52.052902 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-02 00:29:52.052913 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-02 00:29:52.052924 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-02 00:29:52.052935 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-02 00:29:52.052945 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-02 00:29:52.052959 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-02 00:29:52.052972 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-02 00:29:52.052985 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-02 00:29:52.052998 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-02 00:29:52.053011 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-02 00:29:52.053025 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-02 00:29:52.053038 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:29:52.053051 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-02 00:29:52.053063 | orchestrator 
| skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-02 00:29:52.053076 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-02 00:29:52.053093 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-02 00:29:52.053107 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-02 00:29:52.053120 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-02 00:29:52.053139 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-02 00:29:52.053159 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:29:52.053179 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:29:52.053199 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-02 00:29:52.053218 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-02 00:29:52.053239 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-02 00:29:52.053258 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-02 00:29:52.053276 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-02 00:29:52.053307 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-02 00:29:52.053319 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-02 00:29:52.053331 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-02 00:29:52.053403 | orchestrator | 
changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-02 00:29:52.053417 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-02 00:29:52.053428 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-02 00:29:52.053438 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-02 00:29:52.053449 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-02 00:29:52.053465 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-02 00:29:52.053484 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-02 00:29:52.053504 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-02 00:29:52.053564 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-02 00:29:52.053576 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-02 00:29:52.053587 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-02 00:29:52.053598 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-02 00:29:52.053609 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-02 00:29:52.053620 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-02 00:29:52.053630 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-02 00:29:52.053641 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 
'value': 1}) 2026-01-02 00:29:52.053652 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-02 00:29:52.053664 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-02 00:29:52.053685 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-02 00:29:52.053706 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-02 00:29:52.053757 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-02 00:29:52.053769 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-02 00:29:52.053781 | orchestrator | 2026-01-02 00:29:52.053793 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-01-02 00:29:52.053830 | orchestrator | Friday 02 January 2026 00:29:49 +0000 (0:00:05.189) 0:03:28.715 ******** 2026-01-02 00:29:52.053844 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-02 00:29:52.053855 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-02 00:29:52.053866 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-02 00:29:52.053876 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-02 00:29:52.053887 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-02 00:29:52.053898 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-02 00:29:52.053909 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-02 00:29:52.053920 | orchestrator | 2026-01-02 00:29:52.053938 | 
orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-01-02 00:29:52.053950 | orchestrator | Friday 02 January 2026 00:29:51 +0000 (0:00:01.555) 0:03:30.271 ******** 2026-01-02 00:29:52.053960 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-02 00:29:52.053984 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:29:52.054003 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-02 00:29:52.054130 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-02 00:29:52.054148 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:29:52.054159 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:29:52.054170 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-02 00:29:52.054187 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:29:52.054207 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-02 00:29:52.054228 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-02 00:29:52.054259 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-02 00:30:06.435708 | orchestrator | 2026-01-02 00:30:06.435874 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-01-02 00:30:06.435897 | orchestrator | Friday 02 January 2026 00:29:52 +0000 (0:00:00.529) 0:03:30.800 ******** 2026-01-02 00:30:06.435911 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-02 00:30:06.435921 | orchestrator | skipping: [testbed-manager] 
2026-01-02 00:30:06.435930 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:06.435942 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:06.435955 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:30:06.435968 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:06.435979 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:30:06.435992 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:30:06.436000 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:06.436008 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:06.436015 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:06.436023 | orchestrator |
2026-01-02 00:30:06.436030 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-01-02 00:30:06.436038 | orchestrator | Friday 02 January 2026 00:29:53 +0000 (0:00:01.625) 0:03:32.426 ********
2026-01-02 00:30:06.436045 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-02 00:30:06.436053 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:30:06.436060 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-02 00:30:06.436068 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-02 00:30:06.436075 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:30:06.436082 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:30:06.436091 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-02 00:30:06.436103 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:30:06.436116 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-02 00:30:06.436129 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-02 00:30:06.436141 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-02 00:30:06.436179 | orchestrator |
2026-01-02 00:30:06.436193 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-01-02 00:30:06.436204 | orchestrator | Friday 02 January 2026 00:29:54 +0000 (0:00:00.569) 0:03:32.995 ********
2026-01-02 00:30:06.436215 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:30:06.436227 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:30:06.436239 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:30:06.436253 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:30:06.436265 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:30:06.436280 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:30:06.436291 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:30:06.436300 | orchestrator |
2026-01-02 00:30:06.436309 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-01-02 00:30:06.436317 | orchestrator | Friday 02 January 2026 00:29:54 +0000 (0:00:00.320) 0:03:33.316 ********
2026-01-02 00:30:06.436328 | orchestrator | ok: [testbed-manager]
2026-01-02 00:30:06.436342 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:30:06.436356 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:30:06.436368 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:30:06.436380 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:30:06.436389 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:30:06.436401 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:30:06.436414 | orchestrator |
2026-01-02 00:30:06.436444 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-01-02 00:30:06.436454 | orchestrator | Friday 02 January 2026 00:30:00 +0000 (0:00:05.940) 0:03:39.257 ********
2026-01-02 00:30:06.436462 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-01-02 00:30:06.436470 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-01-02 00:30:06.436477 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:30:06.436484 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-01-02 00:30:06.436491 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:30:06.436498 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-01-02 00:30:06.436505 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:30:06.436513 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-01-02 00:30:06.436520 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:30:06.436527 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-01-02 00:30:06.436534 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:30:06.436541 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:30:06.436548 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-01-02 00:30:06.436555 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:30:06.436562 | orchestrator |
2026-01-02 00:30:06.436570 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-01-02 00:30:06.436577 | orchestrator | Friday 02 January 2026 00:30:00 +0000 (0:00:00.261) 0:03:39.519 ********
2026-01-02 00:30:06.436584 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-01-02 00:30:06.436592 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-01-02 00:30:06.436599 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-01-02 00:30:06.436627 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-01-02 00:30:06.436640 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-01-02 00:30:06.436652 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-01-02 00:30:06.436664 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-01-02 00:30:06.436676 | orchestrator |
2026-01-02 00:30:06.436688 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-01-02 00:30:06.436700 | orchestrator | Friday 02 January 2026 00:30:01 +0000 (0:00:01.092) 0:03:40.611 ********
2026-01-02 00:30:06.436712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:30:06.436722 | orchestrator |
2026-01-02 00:30:06.436737 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-01-02 00:30:06.436744 | orchestrator | Friday 02 January 2026 00:30:02 +0000 (0:00:00.429) 0:03:41.041 ********
2026-01-02 00:30:06.436751 | orchestrator | ok: [testbed-manager]
2026-01-02 00:30:06.436759 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:30:06.436766 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:30:06.436773 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:30:06.436780 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:30:06.436787 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:30:06.436830 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:30:06.436839 | orchestrator |
2026-01-02 00:30:06.436846 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-01-02 00:30:06.436853 | orchestrator | Friday 02 January 2026 00:30:03 +0000 (0:00:01.183) 0:03:42.224 ********
2026-01-02 00:30:06.436861 | orchestrator | ok: [testbed-manager]
2026-01-02 00:30:06.436868 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:30:06.436875 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:30:06.436882 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:30:06.436892 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:30:06.436904 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:30:06.436917 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:30:06.436929 | orchestrator |
2026-01-02 00:30:06.436941 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-01-02 00:30:06.436950 | orchestrator | Friday 02 January 2026 00:30:04 +0000 (0:00:00.706) 0:03:42.931 ********
2026-01-02 00:30:06.436958 | orchestrator | changed: [testbed-manager]
2026-01-02 00:30:06.436965 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:30:06.436972 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:30:06.436979 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:30:06.436986 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:30:06.436994 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:30:06.437001 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:30:06.437008 | orchestrator |
2026-01-02 00:30:06.437015 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-01-02 00:30:06.437023 | orchestrator | Friday 02 January 2026 00:30:04 +0000 (0:00:00.647) 0:03:43.578 ********
2026-01-02 00:30:06.437031 | orchestrator | ok: [testbed-manager]
2026-01-02 00:30:06.437043 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:30:06.437056 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:30:06.437068 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:30:06.437081 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:30:06.437088 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:30:06.437095 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:30:06.437102 | orchestrator |
2026-01-02 00:30:06.437111 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-01-02 00:30:06.437124 | orchestrator | Friday 02 January 2026 00:30:05 +0000 (0:00:00.601) 0:03:44.180 ********
2026-01-02 00:30:06.437142 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767312307.516215, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:06.437158 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767312324.9827268, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:06.437178 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767312325.2628176, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:06.437212 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767312323.9553132, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:11.438915 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767312334.2130866, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:11.439009 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767312333.19013, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:11.439025 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767312314.4131496, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:11.439038 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:11.439065 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:11.439098 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:11.439110 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:11.439146 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:11.439159 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:11.439171 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:11.439183 | orchestrator |
2026-01-02 00:30:11.439196 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-01-02 00:30:11.439208 | orchestrator | Friday 02 January 2026 00:30:06 +0000 (0:00:01.004) 0:03:45.185 ********
2026-01-02 00:30:11.439220 | orchestrator | changed: [testbed-manager]
2026-01-02 00:30:11.439232 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:30:11.439243 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:30:11.439254 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:30:11.439265 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:30:11.439275 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:30:11.439286 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:30:11.439297 | orchestrator |
2026-01-02 00:30:11.439309 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-01-02 00:30:11.439320 | orchestrator | Friday 02 January 2026 00:30:07 +0000 (0:00:01.105) 0:03:46.291 ********
2026-01-02 00:30:11.439331 | orchestrator | changed: [testbed-manager]
2026-01-02 00:30:11.439342 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:30:11.439360 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:30:11.439370 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:30:11.439381 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:30:11.439392 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:30:11.439402 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:30:11.439413 | orchestrator |
2026-01-02 00:30:11.439424 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-01-02 00:30:11.439437 | orchestrator | Friday 02 January 2026 00:30:08 +0000 (0:00:01.283) 0:03:47.574 ********
2026-01-02 00:30:11.439450 | orchestrator | changed: [testbed-manager]
2026-01-02 00:30:11.439468 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:30:11.439481 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:30:11.439493 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:30:11.439507 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:30:11.439520 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:30:11.439532 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:30:11.439544 | orchestrator |
2026-01-02 00:30:11.439558 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-01-02 00:30:11.439570 | orchestrator | Friday 02 January 2026 00:30:09 +0000 (0:00:01.183) 0:03:48.758 ********
2026-01-02 00:30:11.439583 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:30:11.439596 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:30:11.439608 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:30:11.439621 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:30:11.439634 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:30:11.439646 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:30:11.439659 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:30:11.439670 | orchestrator |
2026-01-02 00:30:11.439681 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-01-02 00:30:11.439692 | orchestrator | Friday 02 January 2026 00:30:10 +0000 (0:00:00.289) 0:03:49.047 ********
2026-01-02 00:30:11.439703 | orchestrator | ok: [testbed-manager]
2026-01-02 00:30:11.439714 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:30:11.439725 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:30:11.439736 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:30:11.439746 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:30:11.439757 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:30:11.439768 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:30:11.439778 | orchestrator |
2026-01-02 00:30:11.439809 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-01-02 00:30:11.439821 | orchestrator | Friday 02 January 2026 00:30:11 +0000 (0:00:00.726) 0:03:49.774 ********
2026-01-02 00:30:11.439833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:30:11.439846 | orchestrator |
2026-01-02 00:30:11.439858 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-01-02 00:30:11.439876 | orchestrator | Friday 02 January 2026 00:30:11 +0000 (0:00:00.413) 0:03:50.187 ********
2026-01-02 00:31:29.912261 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:29.912402 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:31:29.912428 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:31:29.912447 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:31:29.912466 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:31:29.912486 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:31:29.912505 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:31:29.912526 | orchestrator |
2026-01-02 00:31:29.912548 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-01-02 00:31:29.912569 | orchestrator | Friday 02 January 2026 00:30:19 +0000 (0:00:08.044) 0:03:58.232 ********
2026-01-02 00:31:29.912589 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:31:29.912610 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:29.912631 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:31:29.912683 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:31:29.912703 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:31:29.912801 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:31:29.912827 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:31:29.912847 | orchestrator |
2026-01-02 00:31:29.912867 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-01-02 00:31:29.912887 | orchestrator | Friday 02 January 2026 00:30:20 +0000 (0:00:01.166) 0:03:59.399 ********
2026-01-02 00:31:29.912908 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:29.912927 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:31:29.912946 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:31:29.912965 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:31:29.912984 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:31:29.913003 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:31:29.913022 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:31:29.913041 | orchestrator |
2026-01-02 00:31:29.913060 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-01-02 00:31:29.913077 | orchestrator | Friday 02 January 2026 00:30:21 +0000 (0:00:01.196) 0:04:00.596 ********
2026-01-02 00:31:29.913093 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:29.913110 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:31:29.913128 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:31:29.913145 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:31:29.913163 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:31:29.913181 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:31:29.913199 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:31:29.913217 | orchestrator |
2026-01-02 00:31:29.913236 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-01-02 00:31:29.913254 | orchestrator | Friday 02 January 2026 00:30:22 +0000 (0:00:00.326) 0:04:00.922 ********
2026-01-02 00:31:29.913270 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:29.913286 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:31:29.913302 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:31:29.913319 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:31:29.913335 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:31:29.913350 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:31:29.913367 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:31:29.913383 | orchestrator |
2026-01-02 00:31:29.913401 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-01-02 00:31:29.913411 | orchestrator | Friday 02 January 2026 00:30:22 +0000 (0:00:00.335) 0:04:01.258 ********
2026-01-02 00:31:29.913421 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:29.913431 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:31:29.913440 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:31:29.913450 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:31:29.913459 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:31:29.913468 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:31:29.913477 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:31:29.913487 | orchestrator |
2026-01-02 00:31:29.913497 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-01-02 00:31:29.913506 | orchestrator | Friday 02 January 2026 00:30:22 +0000 (0:00:00.317) 0:04:01.576 ********
2026-01-02 00:31:29.913516 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:29.913525 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:31:29.913535 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:31:29.913560 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:31:29.913570 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:31:29.913579 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:31:29.913588 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:31:29.913598 | orchestrator |
2026-01-02 00:31:29.913607 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-01-02 00:31:29.913617 | orchestrator | Friday 02 January 2026 00:30:28 +0000 (0:00:05.749) 0:04:07.325 ********
2026-01-02 00:31:29.913628 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:31:29.913654 | orchestrator |
2026-01-02 00:31:29.913664 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-01-02 00:31:29.913674 | orchestrator | Friday 02 January 2026 00:30:28 +0000 (0:00:00.428) 0:04:07.754 ********
2026-01-02 00:31:29.913683 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-01-02 00:31:29.913693 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-01-02 00:31:29.913703 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-01-02 00:31:29.913714 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-01-02 00:31:29.913760 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:31:29.913774 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:31:29.913784 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-01-02 00:31:29.913793 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-01-02 00:31:29.913803 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-01-02 00:31:29.913812 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:31:29.913822 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-01-02 00:31:29.913832 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-01-02 00:31:29.913841 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-01-02 00:31:29.913851 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:31:29.913860 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-01-02 00:31:29.913870 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-01-02 00:31:29.913901 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:31:29.913911 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:31:29.913921 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-01-02 00:31:29.913931 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-01-02 00:31:29.913941 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:31:29.913950 | orchestrator |
2026-01-02 00:31:29.913960 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-01-02 00:31:29.913970 | orchestrator | Friday 02 January 2026 00:30:29 +0000 (0:00:00.357) 0:04:08.111 ********
2026-01-02 00:31:29.913980 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:31:29.913990 | orchestrator |
2026-01-02 00:31:29.914000 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-01-02 00:31:29.914010 | orchestrator | Friday 02 January 2026 00:30:29 +0000 (0:00:00.418) 0:04:08.530 ********
2026-01-02 00:31:29.914078 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-01-02 00:31:29.914088 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:31:29.914133 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-01-02 00:31:29.914144 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:31:29.914154 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-01-02 00:31:29.914163 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-01-02 00:31:29.914173 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:31:29.914182 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-01-02 00:31:29.914192 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:31:29.914201 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-01-02 00:31:29.914211 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:31:29.914220 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:31:29.914231 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-01-02 00:31:29.914247 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:31:29.914265 | orchestrator |
2026-01-02 00:31:29.914295 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-01-02 00:31:29.914306 | orchestrator | Friday 02 January 2026 00:30:30 +0000 (0:00:00.327) 0:04:08.858 ********
2026-01-02 00:31:29.914316 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:31:29.914326 | orchestrator |
2026-01-02 00:31:29.914336 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-01-02 00:31:29.914345 | orchestrator | Friday 02 January 2026 00:30:30 +0000 (0:00:00.455) 0:04:09.313 ********
2026-01-02 00:31:29.914355 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:31:29.914364 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:31:29.914374 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:31:29.914383 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:31:29.914393 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:31:29.914402 | orchestrator | changed: [testbed-manager]
2026-01-02 00:31:29.914412 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:31:29.914422 | orchestrator |
2026-01-02 00:31:29.914431 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-01-02 00:31:29.914442 | orchestrator | Friday 02 January 2026 00:31:05 +0000 (0:00:35.070) 0:04:44.384 ********
2026-01-02 00:31:29.914451 | orchestrator | changed: [testbed-manager]
2026-01-02 00:31:29.914461 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:31:29.914471 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:31:29.914480 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:31:29.914490 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:31:29.914499 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:31:29.914509 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:31:29.914518 | orchestrator |
2026-01-02 00:31:29.914528 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-01-02 00:31:29.914537 | orchestrator | Friday 02 January 2026 00:31:14 +0000 (0:00:09.007) 0:04:53.392 ********
2026-01-02 00:31:29.914547 | orchestrator | changed: [testbed-manager]
2026-01-02 00:31:29.914556 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:31:29.914566 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:31:29.914575 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:31:29.914585 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:31:29.914594 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:31:29.914604 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:31:29.914613 | orchestrator |
2026-01-02 00:31:29.914623 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-01-02 00:31:29.914632 | orchestrator | Friday 02 January 2026 00:31:22 +0000 (0:00:07.474) 0:05:00.867 ********
2026-01-02 00:31:29.914642 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:29.914652 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:31:29.914661 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:31:29.914671 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:31:29.914680 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:31:29.914690 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:31:29.914699 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:31:29.914709 | orchestrator |
2026-01-02 00:31:29.914718 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-01-02 00:31:29.914782 | orchestrator | Friday 02 January 2026 00:31:23 +0000 (0:00:01.740) 0:05:02.608 ********
2026-01-02 00:31:29.914793 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:31:29.914802 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:31:29.914812 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:31:29.914822 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:31:29.914831 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:31:29.914894 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:31:29.914906 | orchestrator | changed: [testbed-manager]
2026-01-02 00:31:29.914916 | orchestrator |
2026-01-02 00:31:29.914938 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-01-02 00:31:41.575221 | orchestrator | Friday 02 January 2026 00:31:29 +0000 (0:00:06.046) 0:05:08.654 ********
2026-01-02 00:31:41.575336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:31:41.575362 | orchestrator |
2026-01-02 00:31:41.575380 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-01-02 00:31:41.575397 | orchestrator | Friday 02 January 2026 00:31:30 +0000 (0:00:00.556) 0:05:09.211 ********
2026-01-02 00:31:41.575436 | orchestrator | changed: [testbed-manager]
2026-01-02 00:31:41.575458 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:31:41.575475 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:31:41.575493 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:31:41.575511 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:31:41.575529 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:31:41.575542 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:31:41.575552 | orchestrator |
2026-01-02 00:31:41.575562 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-01-02 00:31:41.575572 | orchestrator | Friday 02 January 2026 00:31:31 +0000 (0:00:00.731) 0:05:09.942 ********
2026-01-02 00:31:41.575582 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:41.575593 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:31:41.575603 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:31:41.575613 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:31:41.575623 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:31:41.575632 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:31:41.575642 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:31:41.575652 | orchestrator |
2026-01-02 00:31:41.575662 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-01-02 00:31:41.575671 | orchestrator | Friday 02 January 2026 00:31:32 +0000 (0:00:01.771) 0:05:11.714 ********
2026-01-02 00:31:41.575681 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:31:41.575691 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:31:41.575700 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:31:41.575710 | orchestrator | changed: [testbed-manager]
2026-01-02 00:31:41.575789 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:31:41.575801 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:31:41.575813 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:31:41.575825 | orchestrator |
2026-01-02 00:31:41.575837 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-01-02 00:31:41.575848 | orchestrator | Friday 02 January 2026 00:31:33 +0000 (0:00:00.826) 0:05:12.540 ********
2026-01-02 00:31:41.575860 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:31:41.575871 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:31:41.575882 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:31:41.575892 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:31:41.575902 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:31:41.575911 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:31:41.575921 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:31:41.575931 | orchestrator |
2026-01-02 00:31:41.575941 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-01-02 00:31:41.575950 | orchestrator | Friday 02 January 2026 00:31:34 +0000 (0:00:00.292) 0:05:12.833 ********
2026-01-02 00:31:41.575960 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:31:41.575970 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:31:41.575979 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:31:41.575989 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:31:41.575999 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:31:41.576008 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:31:41.576018 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:31:41.576028 | orchestrator |
2026-01-02 00:31:41.576044 |
orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-01-02 00:31:41.576077 | orchestrator | Friday 02 January 2026 00:31:34 +0000 (0:00:00.432) 0:05:13.265 ******** 2026-01-02 00:31:41.576087 | orchestrator | ok: [testbed-manager] 2026-01-02 00:31:41.576097 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:31:41.576106 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:31:41.576116 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:31:41.576125 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:31:41.576135 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:31:41.576144 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:31:41.576154 | orchestrator | 2026-01-02 00:31:41.576164 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-01-02 00:31:41.576174 | orchestrator | Friday 02 January 2026 00:31:34 +0000 (0:00:00.302) 0:05:13.568 ******** 2026-01-02 00:31:41.576184 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:31:41.576193 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:31:41.576203 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:31:41.576212 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:31:41.576222 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:31:41.576231 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:31:41.576241 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:31:41.576250 | orchestrator | 2026-01-02 00:31:41.576260 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-01-02 00:31:41.576271 | orchestrator | Friday 02 January 2026 00:31:35 +0000 (0:00:00.339) 0:05:13.908 ******** 2026-01-02 00:31:41.576280 | orchestrator | ok: [testbed-manager] 2026-01-02 00:31:41.576290 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:31:41.576299 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:31:41.576309 | orchestrator | ok: 
[testbed-node-5] 2026-01-02 00:31:41.576319 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:31:41.576328 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:31:41.576337 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:31:41.576347 | orchestrator | 2026-01-02 00:31:41.576356 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-01-02 00:31:41.576366 | orchestrator | Friday 02 January 2026 00:31:35 +0000 (0:00:00.343) 0:05:14.251 ******** 2026-01-02 00:31:41.576376 | orchestrator | ok: [testbed-manager] =>  2026-01-02 00:31:41.576385 | orchestrator |  docker_version: 5:27.5.1 2026-01-02 00:31:41.576395 | orchestrator | ok: [testbed-node-3] =>  2026-01-02 00:31:41.576404 | orchestrator |  docker_version: 5:27.5.1 2026-01-02 00:31:41.576414 | orchestrator | ok: [testbed-node-4] =>  2026-01-02 00:31:41.576423 | orchestrator |  docker_version: 5:27.5.1 2026-01-02 00:31:41.576433 | orchestrator | ok: [testbed-node-5] =>  2026-01-02 00:31:41.576442 | orchestrator |  docker_version: 5:27.5.1 2026-01-02 00:31:41.576471 | orchestrator | ok: [testbed-node-0] =>  2026-01-02 00:31:41.576481 | orchestrator |  docker_version: 5:27.5.1 2026-01-02 00:31:41.576491 | orchestrator | ok: [testbed-node-1] =>  2026-01-02 00:31:41.576501 | orchestrator |  docker_version: 5:27.5.1 2026-01-02 00:31:41.576510 | orchestrator | ok: [testbed-node-2] =>  2026-01-02 00:31:41.576520 | orchestrator |  docker_version: 5:27.5.1 2026-01-02 00:31:41.576529 | orchestrator | 2026-01-02 00:31:41.576539 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-01-02 00:31:41.576549 | orchestrator | Friday 02 January 2026 00:31:35 +0000 (0:00:00.301) 0:05:14.553 ******** 2026-01-02 00:31:41.576558 | orchestrator | ok: [testbed-manager] =>  2026-01-02 00:31:41.576568 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-02 00:31:41.576577 | orchestrator | ok: [testbed-node-3] =>  2026-01-02 00:31:41.576587 | 
orchestrator |  docker_cli_version: 5:27.5.1 2026-01-02 00:31:41.576596 | orchestrator | ok: [testbed-node-4] =>  2026-01-02 00:31:41.576605 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-02 00:31:41.576615 | orchestrator | ok: [testbed-node-5] =>  2026-01-02 00:31:41.576624 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-02 00:31:41.576634 | orchestrator | ok: [testbed-node-0] =>  2026-01-02 00:31:41.576643 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-02 00:31:41.576653 | orchestrator | ok: [testbed-node-1] =>  2026-01-02 00:31:41.576671 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-02 00:31:41.576680 | orchestrator | ok: [testbed-node-2] =>  2026-01-02 00:31:41.576690 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-02 00:31:41.576699 | orchestrator | 2026-01-02 00:31:41.576709 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-01-02 00:31:41.576762 | orchestrator | Friday 02 January 2026 00:31:36 +0000 (0:00:00.333) 0:05:14.886 ******** 2026-01-02 00:31:41.576780 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:31:41.576797 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:31:41.576813 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:31:41.576829 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:31:41.576845 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:31:41.576861 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:31:41.576877 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:31:41.576893 | orchestrator | 2026-01-02 00:31:41.576911 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-01-02 00:31:41.576925 | orchestrator | Friday 02 January 2026 00:31:36 +0000 (0:00:00.294) 0:05:15.180 ******** 2026-01-02 00:31:41.576935 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:31:41.576945 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:31:41.576954 
| orchestrator | skipping: [testbed-node-4] 2026-01-02 00:31:41.576964 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:31:41.576973 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:31:41.576983 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:31:41.576992 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:31:41.577001 | orchestrator | 2026-01-02 00:31:41.577011 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-01-02 00:31:41.577020 | orchestrator | Friday 02 January 2026 00:31:36 +0000 (0:00:00.291) 0:05:15.472 ******** 2026-01-02 00:31:41.577032 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:31:41.577044 | orchestrator | 2026-01-02 00:31:41.577054 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-01-02 00:31:41.577064 | orchestrator | Friday 02 January 2026 00:31:37 +0000 (0:00:00.474) 0:05:15.946 ******** 2026-01-02 00:31:41.577073 | orchestrator | ok: [testbed-manager] 2026-01-02 00:31:41.577083 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:31:41.577103 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:31:41.577119 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:31:41.577136 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:31:41.577151 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:31:41.577167 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:31:41.577182 | orchestrator | 2026-01-02 00:31:41.577199 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-01-02 00:31:41.577216 | orchestrator | Friday 02 January 2026 00:31:38 +0000 (0:00:01.021) 0:05:16.968 ******** 2026-01-02 00:31:41.577234 | orchestrator | ok: [testbed-node-4] 
2026-01-02 00:31:41.577250 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:31:41.577260 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:31:41.577270 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:31:41.577284 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:31:41.577300 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:31:41.577317 | orchestrator | ok: [testbed-manager] 2026-01-02 00:31:41.577332 | orchestrator | 2026-01-02 00:31:41.577348 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-01-02 00:31:41.577364 | orchestrator | Friday 02 January 2026 00:31:41 +0000 (0:00:02.967) 0:05:19.936 ******** 2026-01-02 00:31:41.577378 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-01-02 00:31:41.577395 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-01-02 00:31:41.577411 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-01-02 00:31:41.577440 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-01-02 00:31:41.577459 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-01-02 00:31:41.577473 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-01-02 00:31:41.577483 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:31:41.577493 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-01-02 00:31:41.577502 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-01-02 00:31:41.577511 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:31:41.577521 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-01-02 00:31:41.577530 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-01-02 00:31:41.577540 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:31:41.577549 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-01-02 00:31:41.577559 | orchestrator | skipping: 
[testbed-node-5] => (item=docker-engine)  2026-01-02 00:31:41.577568 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-01-02 00:31:41.577588 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-01-02 00:32:42.475204 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-01-02 00:32:42.475340 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:32:42.475357 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-01-02 00:32:42.475369 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-01-02 00:32:42.475379 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:32:42.475388 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-01-02 00:32:42.475398 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:32:42.475408 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-01-02 00:32:42.475417 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-01-02 00:32:42.475427 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-01-02 00:32:42.475437 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:32:42.475447 | orchestrator | 2026-01-02 00:32:42.475459 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-01-02 00:32:42.475470 | orchestrator | Friday 02 January 2026 00:31:41 +0000 (0:00:00.590) 0:05:20.527 ******** 2026-01-02 00:32:42.475480 | orchestrator | ok: [testbed-manager] 2026-01-02 00:32:42.475490 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:32:42.475499 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:32:42.475509 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:32:42.475518 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:32:42.475528 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:32:42.475537 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:32:42.475547 | orchestrator | 2026-01-02 
00:32:42.475557 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-01-02 00:32:42.475566 | orchestrator | Friday 02 January 2026 00:31:49 +0000 (0:00:07.519) 0:05:28.046 ******** 2026-01-02 00:32:42.475576 | orchestrator | ok: [testbed-manager] 2026-01-02 00:32:42.475586 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:32:42.475595 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:32:42.475606 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:32:42.475623 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:32:42.475634 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:32:42.475644 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:32:42.475654 | orchestrator | 2026-01-02 00:32:42.475748 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-01-02 00:32:42.475761 | orchestrator | Friday 02 January 2026 00:31:50 +0000 (0:00:01.090) 0:05:29.137 ******** 2026-01-02 00:32:42.475773 | orchestrator | ok: [testbed-manager] 2026-01-02 00:32:42.475785 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:32:42.475796 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:32:42.475807 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:32:42.475817 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:32:42.475853 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:32:42.475865 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:32:42.475876 | orchestrator | 2026-01-02 00:32:42.475887 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-01-02 00:32:42.475898 | orchestrator | Friday 02 January 2026 00:31:58 +0000 (0:00:07.967) 0:05:37.104 ******** 2026-01-02 00:32:42.475910 | orchestrator | changed: [testbed-manager] 2026-01-02 00:32:42.475921 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:32:42.475931 | orchestrator | changed: [testbed-node-5] 2026-01-02 
00:32:42.475942 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:32:42.475954 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:32:42.475965 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:32:42.475975 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:32:42.475987 | orchestrator | 2026-01-02 00:32:42.475998 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-01-02 00:32:42.476009 | orchestrator | Friday 02 January 2026 00:32:01 +0000 (0:00:03.486) 0:05:40.591 ******** 2026-01-02 00:32:42.476034 | orchestrator | ok: [testbed-manager] 2026-01-02 00:32:42.476045 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:32:42.476056 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:32:42.476067 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:32:42.476078 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:32:42.476090 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:32:42.476101 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:32:42.476111 | orchestrator | 2026-01-02 00:32:42.476120 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-01-02 00:32:42.476130 | orchestrator | Friday 02 January 2026 00:32:03 +0000 (0:00:01.369) 0:05:41.960 ******** 2026-01-02 00:32:42.476140 | orchestrator | ok: [testbed-manager] 2026-01-02 00:32:42.476149 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:32:42.476159 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:32:42.476170 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:32:42.476187 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:32:42.476198 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:32:42.476208 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:32:42.476217 | orchestrator | 2026-01-02 00:32:42.476227 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-01-02 
00:32:42.476236 | orchestrator | Friday 02 January 2026 00:32:04 +0000 (0:00:01.574) 0:05:43.534 ******** 2026-01-02 00:32:42.476246 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:32:42.476255 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:32:42.476264 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:32:42.476274 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:32:42.476283 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:32:42.476293 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:32:42.476303 | orchestrator | changed: [testbed-manager] 2026-01-02 00:32:42.476312 | orchestrator | 2026-01-02 00:32:42.476322 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-01-02 00:32:42.476331 | orchestrator | Friday 02 January 2026 00:32:05 +0000 (0:00:00.635) 0:05:44.169 ******** 2026-01-02 00:32:42.476341 | orchestrator | ok: [testbed-manager] 2026-01-02 00:32:42.476350 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:32:42.476360 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:32:42.476369 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:32:42.476378 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:32:42.476387 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:32:42.476397 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:32:42.476406 | orchestrator | 2026-01-02 00:32:42.476416 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-01-02 00:32:42.476442 | orchestrator | Friday 02 January 2026 00:32:14 +0000 (0:00:09.503) 0:05:53.673 ******** 2026-01-02 00:32:42.476452 | orchestrator | changed: [testbed-manager] 2026-01-02 00:32:42.476462 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:32:42.476471 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:32:42.476490 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:32:42.476499 | orchestrator | changed: 
[testbed-node-0] 2026-01-02 00:32:42.476508 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:32:42.476518 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:32:42.476527 | orchestrator | 2026-01-02 00:32:42.476537 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-01-02 00:32:42.476546 | orchestrator | Friday 02 January 2026 00:32:15 +0000 (0:00:00.970) 0:05:54.643 ******** 2026-01-02 00:32:42.476556 | orchestrator | ok: [testbed-manager] 2026-01-02 00:32:42.476565 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:32:42.476575 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:32:42.476584 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:32:42.476594 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:32:42.476603 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:32:42.476613 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:32:42.476629 | orchestrator | 2026-01-02 00:32:42.476653 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-01-02 00:32:42.476697 | orchestrator | Friday 02 January 2026 00:32:24 +0000 (0:00:08.663) 0:06:03.307 ******** 2026-01-02 00:32:42.476714 | orchestrator | ok: [testbed-manager] 2026-01-02 00:32:42.476729 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:32:42.476743 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:32:42.476757 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:32:42.476771 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:32:42.476786 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:32:42.476800 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:32:42.476814 | orchestrator | 2026-01-02 00:32:42.476830 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-01-02 00:32:42.476846 | orchestrator | Friday 02 January 2026 00:32:35 +0000 (0:00:11.301) 0:06:14.609 ******** 2026-01-02 
00:32:42.476864 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-01-02 00:32:42.476882 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-01-02 00:32:42.476899 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-01-02 00:32:42.476913 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-01-02 00:32:42.476922 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-01-02 00:32:42.476932 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-01-02 00:32:42.476941 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-01-02 00:32:42.476951 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-01-02 00:32:42.476960 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-01-02 00:32:42.476970 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-01-02 00:32:42.476979 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-01-02 00:32:42.476989 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-01-02 00:32:42.476998 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-01-02 00:32:42.477007 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-01-02 00:32:42.477017 | orchestrator | 2026-01-02 00:32:42.477026 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2026-01-02 00:32:42.477036 | orchestrator | Friday 02 January 2026 00:32:37 +0000 (0:00:01.226) 0:06:15.835 ******** 2026-01-02 00:32:42.477045 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:32:42.477055 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:32:42.477064 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:32:42.477073 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:32:42.477083 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:32:42.477100 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:32:42.477109 | orchestrator 
| skipping: [testbed-node-2] 2026-01-02 00:32:42.477119 | orchestrator | 2026-01-02 00:32:42.477128 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-01-02 00:32:42.477138 | orchestrator | Friday 02 January 2026 00:32:37 +0000 (0:00:00.545) 0:06:16.381 ******** 2026-01-02 00:32:42.477156 | orchestrator | ok: [testbed-manager] 2026-01-02 00:32:42.477166 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:32:42.477175 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:32:42.477185 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:32:42.477194 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:32:42.477204 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:32:42.477213 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:32:42.477222 | orchestrator | 2026-01-02 00:32:42.477232 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-01-02 00:32:42.477243 | orchestrator | Friday 02 January 2026 00:32:41 +0000 (0:00:03.794) 0:06:20.176 ******** 2026-01-02 00:32:42.477253 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:32:42.477262 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:32:42.477274 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:32:42.477291 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:32:42.477300 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:32:42.477310 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:32:42.477319 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:32:42.477329 | orchestrator | 2026-01-02 00:32:42.477339 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-01-02 00:32:42.477349 | orchestrator | Friday 02 January 2026 00:32:41 +0000 (0:00:00.560) 0:06:20.736 ******** 2026-01-02 00:32:42.477358 | orchestrator | skipping: [testbed-manager] => 
(item=python3-docker)  2026-01-02 00:32:42.477368 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-01-02 00:32:42.477377 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:32:42.477387 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-01-02 00:32:42.477396 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-01-02 00:32:42.477405 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:32:42.477415 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-01-02 00:32:42.477424 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-01-02 00:32:42.477434 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:32:42.477453 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-01-02 00:33:02.410332 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-01-02 00:33:02.410455 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:33:02.410472 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-01-02 00:33:02.410484 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-01-02 00:33:02.410496 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:33:02.410507 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-01-02 00:33:02.410518 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-01-02 00:33:02.410529 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:33:02.410540 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-01-02 00:33:02.410557 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-01-02 00:33:02.410576 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:33:02.410594 | orchestrator | 2026-01-02 00:33:02.410615 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-01-02 00:33:02.410635 | 
orchestrator | Friday 02 January 2026 00:32:42 +0000 (0:00:00.753) 0:06:21.490 ******** 2026-01-02 00:33:02.410681 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:33:02.410700 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:33:02.410719 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:33:02.410738 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:33:02.410757 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:33:02.410775 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:33:02.410787 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:33:02.410798 | orchestrator | 2026-01-02 00:33:02.410811 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-01-02 00:33:02.410863 | orchestrator | Friday 02 January 2026 00:32:43 +0000 (0:00:00.534) 0:06:22.024 ******** 2026-01-02 00:33:02.410883 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:33:02.410902 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:33:02.410920 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:33:02.410938 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:33:02.410957 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:33:02.410975 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:33:02.410993 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:33:02.411009 | orchestrator | 2026-01-02 00:33:02.411027 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-01-02 00:33:02.411045 | orchestrator | Friday 02 January 2026 00:32:43 +0000 (0:00:00.565) 0:06:22.590 ******** 2026-01-02 00:33:02.411063 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:33:02.411082 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:33:02.411100 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:33:02.411118 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:33:02.411137 | orchestrator | 
skipping: [testbed-node-0] 2026-01-02 00:33:02.411156 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:33:02.411174 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:33:02.411192 | orchestrator | 2026-01-02 00:33:02.411211 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-01-02 00:33:02.411230 | orchestrator | Friday 02 January 2026 00:32:44 +0000 (0:00:00.545) 0:06:23.135 ******** 2026-01-02 00:33:02.411247 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:02.411266 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:33:02.411284 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:33:02.411302 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:33:02.411320 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:02.411337 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:02.411355 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:02.411373 | orchestrator | 2026-01-02 00:33:02.411391 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-01-02 00:33:02.411408 | orchestrator | Friday 02 January 2026 00:32:46 +0000 (0:00:01.865) 0:06:25.001 ******** 2026-01-02 00:33:02.411428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:33:02.411449 | orchestrator | 2026-01-02 00:33:02.411468 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-01-02 00:33:02.411486 | orchestrator | Friday 02 January 2026 00:32:47 +0000 (0:00:00.921) 0:06:25.923 ******** 2026-01-02 00:33:02.411505 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:02.411524 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:33:02.411543 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:33:02.411561 | orchestrator | 
changed: [testbed-node-5] 2026-01-02 00:33:02.411580 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:33:02.411599 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:33:02.411618 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:33:02.411637 | orchestrator | 2026-01-02 00:33:02.411767 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-01-02 00:33:02.411792 | orchestrator | Friday 02 January 2026 00:32:48 +0000 (0:00:00.846) 0:06:26.769 ******** 2026-01-02 00:33:02.411811 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:02.411831 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:33:02.411848 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:33:02.411865 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:33:02.411881 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:33:02.411897 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:33:02.411913 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:33:02.411929 | orchestrator | 2026-01-02 00:33:02.411947 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-01-02 00:33:02.411985 | orchestrator | Friday 02 January 2026 00:32:48 +0000 (0:00:00.883) 0:06:27.653 ******** 2026-01-02 00:33:02.412005 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:02.412022 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:33:02.412039 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:33:02.412051 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:33:02.412062 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:33:02.412073 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:33:02.412083 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:33:02.412094 | orchestrator | 2026-01-02 00:33:02.412105 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-01-02 00:33:02.412143 | 
orchestrator | Friday 02 January 2026 00:32:50 +0000 (0:00:01.580) 0:06:29.233 ******** 2026-01-02 00:33:02.412154 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:33:02.412165 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:33:02.412176 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:33:02.412187 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:33:02.412198 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:02.412209 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:02.412220 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:02.412230 | orchestrator | 2026-01-02 00:33:02.412241 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-01-02 00:33:02.412253 | orchestrator | Friday 02 January 2026 00:32:51 +0000 (0:00:01.379) 0:06:30.612 ******** 2026-01-02 00:33:02.412264 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:02.412274 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:33:02.412285 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:33:02.412296 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:33:02.412307 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:33:02.412318 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:33:02.412328 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:33:02.412339 | orchestrator | 2026-01-02 00:33:02.412350 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-01-02 00:33:02.412361 | orchestrator | Friday 02 January 2026 00:32:53 +0000 (0:00:01.333) 0:06:31.946 ******** 2026-01-02 00:33:02.412372 | orchestrator | changed: [testbed-manager] 2026-01-02 00:33:02.412383 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:33:02.412393 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:33:02.412404 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:33:02.412415 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:33:02.412425 | 
orchestrator | changed: [testbed-node-1] 2026-01-02 00:33:02.412436 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:33:02.412447 | orchestrator | 2026-01-02 00:33:02.412457 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-01-02 00:33:02.412468 | orchestrator | Friday 02 January 2026 00:32:54 +0000 (0:00:01.375) 0:06:33.322 ******** 2026-01-02 00:33:02.412480 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:33:02.412492 | orchestrator | 2026-01-02 00:33:02.412503 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-01-02 00:33:02.412514 | orchestrator | Friday 02 January 2026 00:32:55 +0000 (0:00:01.092) 0:06:34.415 ******** 2026-01-02 00:33:02.412525 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:33:02.412536 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:02.412546 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:33:02.412557 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:33:02.412569 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:02.412587 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:02.412606 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:02.412624 | orchestrator | 2026-01-02 00:33:02.412642 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-01-02 00:33:02.412741 | orchestrator | Friday 02 January 2026 00:32:57 +0000 (0:00:01.420) 0:06:35.836 ******** 2026-01-02 00:33:02.412779 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:02.412799 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:33:02.412811 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:33:02.412822 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:33:02.412832 | orchestrator | 
ok: [testbed-node-0] 2026-01-02 00:33:02.412843 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:02.412853 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:02.412864 | orchestrator | 2026-01-02 00:33:02.412875 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-01-02 00:33:02.412893 | orchestrator | Friday 02 January 2026 00:32:58 +0000 (0:00:01.354) 0:06:37.191 ******** 2026-01-02 00:33:02.412905 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:02.412915 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:33:02.412928 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:33:02.412947 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:33:02.412965 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:02.412981 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:02.412999 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:02.413015 | orchestrator | 2026-01-02 00:33:02.413032 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-01-02 00:33:02.413050 | orchestrator | Friday 02 January 2026 00:32:59 +0000 (0:00:01.223) 0:06:38.414 ******** 2026-01-02 00:33:02.413066 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:02.413083 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:33:02.413100 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:33:02.413119 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:33:02.413137 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:02.413155 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:02.413174 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:02.413192 | orchestrator | 2026-01-02 00:33:02.413209 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-01-02 00:33:02.413227 | orchestrator | Friday 02 January 2026 00:33:01 +0000 (0:00:01.505) 0:06:39.919 ******** 2026-01-02 00:33:02.413247 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:33:02.413266 | orchestrator | 2026-01-02 00:33:02.413284 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-02 00:33:02.413301 | orchestrator | Friday 02 January 2026 00:33:02 +0000 (0:00:00.915) 0:06:40.835 ******** 2026-01-02 00:33:02.413321 | orchestrator | 2026-01-02 00:33:02.413339 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-02 00:33:02.413358 | orchestrator | Friday 02 January 2026 00:33:02 +0000 (0:00:00.040) 0:06:40.876 ******** 2026-01-02 00:33:02.413376 | orchestrator | 2026-01-02 00:33:02.413395 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-02 00:33:02.413413 | orchestrator | Friday 02 January 2026 00:33:02 +0000 (0:00:00.056) 0:06:40.932 ******** 2026-01-02 00:33:02.413430 | orchestrator | 2026-01-02 00:33:02.413447 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-02 00:33:02.413484 | orchestrator | Friday 02 January 2026 00:33:02 +0000 (0:00:00.049) 0:06:40.982 ******** 2026-01-02 00:33:28.666924 | orchestrator | 2026-01-02 00:33:28.667043 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-02 00:33:28.667061 | orchestrator | Friday 02 January 2026 00:33:02 +0000 (0:00:00.040) 0:06:41.023 ******** 2026-01-02 00:33:28.667074 | orchestrator | 2026-01-02 00:33:28.667086 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-02 00:33:28.667097 | orchestrator | Friday 02 January 2026 00:33:02 +0000 (0:00:00.042) 0:06:41.065 ******** 2026-01-02 00:33:28.667108 | orchestrator | 2026-01-02 
00:33:28.667119 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-02 00:33:28.667130 | orchestrator | Friday 02 January 2026 00:33:02 +0000 (0:00:00.046) 0:06:41.111 ******** 2026-01-02 00:33:28.667168 | orchestrator | 2026-01-02 00:33:28.667180 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-02 00:33:28.667191 | orchestrator | Friday 02 January 2026 00:33:02 +0000 (0:00:00.040) 0:06:41.152 ******** 2026-01-02 00:33:28.667202 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:28.667215 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:28.667225 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:28.667236 | orchestrator | 2026-01-02 00:33:28.667248 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-01-02 00:33:28.667259 | orchestrator | Friday 02 January 2026 00:33:03 +0000 (0:00:01.186) 0:06:42.338 ******** 2026-01-02 00:33:28.667270 | orchestrator | changed: [testbed-manager] 2026-01-02 00:33:28.667282 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:33:28.667293 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:33:28.667304 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:33:28.667315 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:33:28.667326 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:33:28.667336 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:33:28.667347 | orchestrator | 2026-01-02 00:33:28.667358 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-01-02 00:33:28.667370 | orchestrator | Friday 02 January 2026 00:33:05 +0000 (0:00:01.520) 0:06:43.858 ******** 2026-01-02 00:33:28.667381 | orchestrator | changed: [testbed-manager] 2026-01-02 00:33:28.667392 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:33:28.667402 | orchestrator | changed: [testbed-node-4] 2026-01-02 
00:33:28.667413 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:33:28.667424 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:33:28.667435 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:33:28.667447 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:33:28.667460 | orchestrator | 2026-01-02 00:33:28.667473 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-01-02 00:33:28.667486 | orchestrator | Friday 02 January 2026 00:33:06 +0000 (0:00:01.222) 0:06:45.081 ******** 2026-01-02 00:33:28.667499 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:33:28.667513 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:33:28.667525 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:33:28.667538 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:33:28.667550 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:33:28.667563 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:33:28.667576 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:33:28.667589 | orchestrator | 2026-01-02 00:33:28.667601 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-01-02 00:33:28.667614 | orchestrator | Friday 02 January 2026 00:33:08 +0000 (0:00:02.387) 0:06:47.469 ******** 2026-01-02 00:33:28.667653 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:33:28.667666 | orchestrator | 2026-01-02 00:33:28.667679 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-01-02 00:33:28.667691 | orchestrator | Friday 02 January 2026 00:33:08 +0000 (0:00:00.115) 0:06:47.584 ******** 2026-01-02 00:33:28.667704 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:28.667717 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:33:28.667731 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:33:28.667743 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:33:28.667756 | 
orchestrator | changed: [testbed-node-0] 2026-01-02 00:33:28.667768 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:33:28.667782 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:33:28.667795 | orchestrator | 2026-01-02 00:33:28.667807 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-01-02 00:33:28.667819 | orchestrator | Friday 02 January 2026 00:33:09 +0000 (0:00:01.058) 0:06:48.643 ******** 2026-01-02 00:33:28.667830 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:33:28.667841 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:33:28.667852 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:33:28.667872 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:33:28.667882 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:33:28.667893 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:33:28.667904 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:33:28.667915 | orchestrator | 2026-01-02 00:33:28.667925 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-01-02 00:33:28.667936 | orchestrator | Friday 02 January 2026 00:33:10 +0000 (0:00:00.565) 0:06:49.209 ******** 2026-01-02 00:33:28.667948 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:33:28.667961 | orchestrator | 2026-01-02 00:33:28.667972 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-01-02 00:33:28.667983 | orchestrator | Friday 02 January 2026 00:33:11 +0000 (0:00:01.104) 0:06:50.314 ******** 2026-01-02 00:33:28.667994 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:28.668005 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:33:28.668016 | orchestrator | ok: 
[testbed-node-4] 2026-01-02 00:33:28.668027 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:33:28.668037 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:28.668048 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:28.668059 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:28.668070 | orchestrator | 2026-01-02 00:33:28.668081 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-01-02 00:33:28.668092 | orchestrator | Friday 02 January 2026 00:33:12 +0000 (0:00:00.869) 0:06:51.183 ******** 2026-01-02 00:33:28.668103 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-01-02 00:33:28.668133 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-01-02 00:33:28.668144 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-01-02 00:33:28.668155 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-01-02 00:33:28.668166 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-01-02 00:33:28.668177 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-01-02 00:33:28.668188 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-01-02 00:33:28.668199 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-01-02 00:33:28.668210 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-01-02 00:33:28.668221 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-01-02 00:33:28.668232 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-01-02 00:33:28.668243 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-01-02 00:33:28.668253 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-01-02 00:33:28.668264 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-01-02 00:33:28.668276 | orchestrator | 2026-01-02 00:33:28.668287 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-01-02 00:33:28.668298 | orchestrator | Friday 02 January 2026 00:33:14 +0000 (0:00:02.570) 0:06:53.754 ******** 2026-01-02 00:33:28.668309 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:33:28.668320 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:33:28.668330 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:33:28.668341 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:33:28.668352 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:33:28.668363 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:33:28.668373 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:33:28.668384 | orchestrator | 2026-01-02 00:33:28.668395 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-01-02 00:33:28.668406 | orchestrator | Friday 02 January 2026 00:33:15 +0000 (0:00:00.788) 0:06:54.542 ******** 2026-01-02 00:33:28.668419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:33:28.668439 | orchestrator | 2026-01-02 00:33:28.668450 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-01-02 00:33:28.668461 | orchestrator | Friday 02 January 2026 00:33:16 +0000 (0:00:00.874) 0:06:55.417 ******** 2026-01-02 00:33:28.668472 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:28.668483 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:33:28.668493 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:33:28.668504 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:33:28.668515 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:28.668526 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:28.668536 | orchestrator | ok: 
[testbed-node-2] 2026-01-02 00:33:28.668547 | orchestrator | 2026-01-02 00:33:28.668558 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-01-02 00:33:28.668569 | orchestrator | Friday 02 January 2026 00:33:17 +0000 (0:00:00.916) 0:06:56.333 ******** 2026-01-02 00:33:28.668580 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:28.668591 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:33:28.668602 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:33:28.668613 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:33:28.668638 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:28.668649 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:28.668676 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:28.668687 | orchestrator | 2026-01-02 00:33:28.668698 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-01-02 00:33:28.668709 | orchestrator | Friday 02 January 2026 00:33:18 +0000 (0:00:01.047) 0:06:57.381 ******** 2026-01-02 00:33:28.668720 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:33:28.668731 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:33:28.668742 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:33:28.668753 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:33:28.668764 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:33:28.668774 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:33:28.668785 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:33:28.668796 | orchestrator | 2026-01-02 00:33:28.668807 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-01-02 00:33:28.668818 | orchestrator | Friday 02 January 2026 00:33:19 +0000 (0:00:00.526) 0:06:57.907 ******** 2026-01-02 00:33:28.668829 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:28.668840 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:33:28.668850 | 
orchestrator | ok: [testbed-node-4] 2026-01-02 00:33:28.668861 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:33:28.668872 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:28.668882 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:28.668893 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:28.668904 | orchestrator | 2026-01-02 00:33:28.668915 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-01-02 00:33:28.668926 | orchestrator | Friday 02 January 2026 00:33:20 +0000 (0:00:01.515) 0:06:59.423 ******** 2026-01-02 00:33:28.668937 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:33:28.668948 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:33:28.668958 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:33:28.668969 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:33:28.668980 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:33:28.668990 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:33:28.669001 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:33:28.669012 | orchestrator | 2026-01-02 00:33:28.669022 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-01-02 00:33:28.669033 | orchestrator | Friday 02 January 2026 00:33:21 +0000 (0:00:00.558) 0:06:59.981 ******** 2026-01-02 00:33:28.669044 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:28.669055 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:33:28.669066 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:33:28.669077 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:33:28.669095 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:33:28.669106 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:33:28.669123 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:34:01.110995 | orchestrator | 2026-01-02 00:34:01.111130 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-01-02 00:34:01.111160 | orchestrator | Friday 02 January 2026 00:33:28 +0000 (0:00:07.433) 0:07:07.415 ******** 2026-01-02 00:34:01.111179 | orchestrator | ok: [testbed-manager] 2026-01-02 00:34:01.111201 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:34:01.111222 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:34:01.111242 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:34:01.111261 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:34:01.111280 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:34:01.111292 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:34:01.111303 | orchestrator | 2026-01-02 00:34:01.111315 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-01-02 00:34:01.111326 | orchestrator | Friday 02 January 2026 00:33:30 +0000 (0:00:01.621) 0:07:09.037 ******** 2026-01-02 00:34:01.111337 | orchestrator | ok: [testbed-manager] 2026-01-02 00:34:01.111348 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:34:01.111359 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:34:01.111370 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:34:01.111381 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:34:01.111392 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:34:01.111403 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:34:01.111413 | orchestrator | 2026-01-02 00:34:01.111425 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-01-02 00:34:01.111436 | orchestrator | Friday 02 January 2026 00:33:32 +0000 (0:00:01.814) 0:07:10.851 ******** 2026-01-02 00:34:01.111447 | orchestrator | ok: [testbed-manager] 2026-01-02 00:34:01.111458 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:34:01.111469 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:34:01.111479 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:34:01.111490 | 
orchestrator | changed: [testbed-node-0] 2026-01-02 00:34:01.111501 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:34:01.111512 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:34:01.111542 | orchestrator | 2026-01-02 00:34:01.111561 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-02 00:34:01.111581 | orchestrator | Friday 02 January 2026 00:33:33 +0000 (0:00:01.690) 0:07:12.541 ******** 2026-01-02 00:34:01.111624 | orchestrator | ok: [testbed-manager] 2026-01-02 00:34:01.111643 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:34:01.111662 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:34:01.111680 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:34:01.111699 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:34:01.111718 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:34:01.111736 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:34:01.111751 | orchestrator | 2026-01-02 00:34:01.111762 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-02 00:34:01.111773 | orchestrator | Friday 02 January 2026 00:33:34 +0000 (0:00:00.867) 0:07:13.409 ******** 2026-01-02 00:34:01.111784 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:34:01.111795 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:34:01.111806 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:34:01.111816 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:34:01.111827 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:34:01.111838 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:34:01.111848 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:34:01.111859 | orchestrator | 2026-01-02 00:34:01.111870 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-01-02 00:34:01.111881 | orchestrator | Friday 02 January 2026 00:33:35 +0000 (0:00:01.025) 0:07:14.435 ******** 
2026-01-02 00:34:01.111893 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:34:01.111904 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:34:01.111941 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:34:01.111953 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:34:01.111978 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:34:01.111988 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:34:01.111999 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:34:01.112010 | orchestrator | 2026-01-02 00:34:01.112020 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-01-02 00:34:01.112031 | orchestrator | Friday 02 January 2026 00:33:36 +0000 (0:00:00.560) 0:07:14.995 ******** 2026-01-02 00:34:01.112042 | orchestrator | ok: [testbed-manager] 2026-01-02 00:34:01.112052 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:34:01.112063 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:34:01.112073 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:34:01.112084 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:34:01.112095 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:34:01.112105 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:34:01.112116 | orchestrator | 2026-01-02 00:34:01.112127 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-01-02 00:34:01.112154 | orchestrator | Friday 02 January 2026 00:33:36 +0000 (0:00:00.538) 0:07:15.534 ******** 2026-01-02 00:34:01.112165 | orchestrator | ok: [testbed-manager] 2026-01-02 00:34:01.112176 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:34:01.112187 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:34:01.112197 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:34:01.112208 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:34:01.112219 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:34:01.112229 | orchestrator | ok: [testbed-node-2] 2026-01-02 
00:34:01.112240 | orchestrator |
2026-01-02 00:34:01.112251 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-01-02 00:34:01.112262 | orchestrator | Friday 02 January 2026 00:33:37 +0000 (0:00:00.602) 0:07:16.136 ********
2026-01-02 00:34:01.112273 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:01.112283 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:01.112294 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:01.112304 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:01.112315 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:01.112325 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:01.112336 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:01.112347 | orchestrator |
2026-01-02 00:34:01.112357 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-01-02 00:34:01.112368 | orchestrator | Friday 02 January 2026 00:33:38 +0000 (0:00:00.766) 0:07:16.903 ********
2026-01-02 00:34:01.112379 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:01.112390 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:01.112400 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:01.112411 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:01.112421 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:01.112432 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:01.112442 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:01.112453 | orchestrator |
2026-01-02 00:34:01.112483 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-01-02 00:34:01.112495 | orchestrator | Friday 02 January 2026 00:33:43 +0000 (0:00:05.600) 0:07:22.503 ********
2026-01-02 00:34:01.112506 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:34:01.112517 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:34:01.112528 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:34:01.112539 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:34:01.112549 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:34:01.112560 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:34:01.112570 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:34:01.112581 | orchestrator |
2026-01-02 00:34:01.112628 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-01-02 00:34:01.112640 | orchestrator | Friday 02 January 2026 00:33:44 +0000 (0:00:00.539) 0:07:23.043 ********
2026-01-02 00:34:01.112653 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:34:01.112676 | orchestrator |
2026-01-02 00:34:01.112687 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-01-02 00:34:01.112698 | orchestrator | Friday 02 January 2026 00:33:45 +0000 (0:00:01.065) 0:07:24.108 ********
2026-01-02 00:34:01.112708 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:01.112733 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:01.112745 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:01.112756 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:01.112766 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:01.112777 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:01.112787 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:01.112798 | orchestrator |
2026-01-02 00:34:01.112809 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-01-02 00:34:01.112820 | orchestrator | Friday 02 January 2026 00:33:47 +0000 (0:00:01.924) 0:07:26.033 ********
2026-01-02 00:34:01.112830 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:01.112841 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:01.112851 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:01.112862 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:01.112873 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:01.112883 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:01.112894 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:01.112904 | orchestrator |
2026-01-02 00:34:01.112915 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-01-02 00:34:01.112926 | orchestrator | Friday 02 January 2026 00:33:48 +0000 (0:00:01.156) 0:07:27.190 ********
2026-01-02 00:34:01.112937 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:01.112947 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:01.112958 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:01.112968 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:01.112979 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:01.112990 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:01.113001 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:01.113011 | orchestrator |
2026-01-02 00:34:01.113022 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-01-02 00:34:01.113033 | orchestrator | Friday 02 January 2026 00:33:49 +0000 (0:00:00.857) 0:07:28.047 ********
2026-01-02 00:34:01.113044 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-02 00:34:01.113057 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-02 00:34:01.113068 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-02 00:34:01.113080 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-02 00:34:01.113090 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-02 00:34:01.113101 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-02 00:34:01.113112 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-02 00:34:01.113122 | orchestrator |
2026-01-02 00:34:01.113133 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-01-02 00:34:01.113144 | orchestrator | Friday 02 January 2026 00:33:51 +0000 (0:00:01.879) 0:07:29.926 ********
2026-01-02 00:34:01.113155 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:34:01.113175 | orchestrator |
2026-01-02 00:34:01.113186 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-01-02 00:34:01.113197 | orchestrator | Friday 02 January 2026 00:33:52 +0000 (0:00:00.884) 0:07:30.811 ********
2026-01-02 00:34:01.113208 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:01.113219 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:01.113230 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:01.113241 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:01.113251 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:01.113262 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:01.113273 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:01.113283 | orchestrator |
2026-01-02 00:34:01.113303 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-01-02 00:34:32.417408 | orchestrator | Friday 02 January 2026 00:34:01 +0000 (0:00:09.045) 0:07:39.857 ********
2026-01-02 00:34:32.417586 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:32.417605 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:32.417616 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:32.417626 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:32.417636 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:32.417646 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:32.417656 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:32.417666 | orchestrator |
2026-01-02 00:34:32.417677 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-01-02 00:34:32.417688 | orchestrator | Friday 02 January 2026 00:34:03 +0000 (0:00:02.015) 0:07:41.872 ********
2026-01-02 00:34:32.417698 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:32.417707 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:32.417717 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:32.417727 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:32.417737 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:32.417747 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:32.417756 | orchestrator |
2026-01-02 00:34:32.417767 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-01-02 00:34:32.417777 | orchestrator | Friday 02 January 2026 00:34:04 +0000 (0:00:01.321) 0:07:43.194 ********
2026-01-02 00:34:32.417787 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:32.417799 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:32.417809 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:32.417818 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:32.417828 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:32.417838 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:32.417848 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:32.417858 | orchestrator |
2026-01-02 00:34:32.417867 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-01-02 00:34:32.417877 | orchestrator |
2026-01-02 00:34:32.417887 | orchestrator | TASK [Include hardening role] **************************************************
2026-01-02 00:34:32.417897 | orchestrator | Friday 02 January 2026 00:34:05 +0000 (0:00:01.254) 0:07:44.449 ********
2026-01-02 00:34:32.417909 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:34:32.417921 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:34:32.417933 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:34:32.417943 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:34:32.417955 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:34:32.417966 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:34:32.417977 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:34:32.417988 | orchestrator |
2026-01-02 00:34:32.417999 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-01-02 00:34:32.418010 | orchestrator |
2026-01-02 00:34:32.418080 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-01-02 00:34:32.418091 | orchestrator | Friday 02 January 2026 00:34:06 +0000 (0:00:00.722) 0:07:45.171 ********
2026-01-02 00:34:32.418123 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:32.418135 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:32.418177 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:32.418189 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:32.418200 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:32.418211 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:32.418222 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:32.418235 | orchestrator |
2026-01-02 00:34:32.418247 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-01-02 00:34:32.418259 | orchestrator | Friday 02 January 2026 00:34:07 +0000 (0:00:01.354) 0:07:46.525 ********
2026-01-02 00:34:32.418270 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:32.418282 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:32.418293 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:32.418303 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:32.418318 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:32.418327 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:32.418337 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:32.418346 | orchestrator |
2026-01-02 00:34:32.418356 | orchestrator | TASK [Include auditd role] *****************************************************
2026-01-02 00:34:32.418366 | orchestrator | Friday 02 January 2026 00:34:09 +0000 (0:00:01.517) 0:07:48.043 ********
2026-01-02 00:34:32.418376 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:34:32.418386 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:34:32.418396 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:34:32.418405 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:34:32.418415 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:34:32.418424 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:34:32.418434 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:34:32.418444 | orchestrator |
2026-01-02 00:34:32.418453 | orchestrator | TASK [Include smartd role] *****************************************************
2026-01-02 00:34:32.418463 | orchestrator | Friday 02 January 2026 00:34:09 +0000 (0:00:00.543) 0:07:48.586 ********
2026-01-02 00:34:32.418474 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:34:32.418485 | orchestrator |
2026-01-02 00:34:32.418495 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-01-02 00:34:32.418504 | orchestrator | Friday 02 January 2026 00:34:10 +0000 (0:00:01.035) 0:07:49.622 ********
2026-01-02 00:34:32.418515 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:34:32.418528 | orchestrator |
2026-01-02 00:34:32.418537 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-01-02 00:34:32.418577 | orchestrator | Friday 02 January 2026 00:34:11 +0000 (0:00:00.814) 0:07:50.437 ********
2026-01-02 00:34:32.418587 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:32.418597 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:32.418607 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:32.418616 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:32.418626 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:32.418636 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:32.418645 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:32.418655 | orchestrator |
2026-01-02 00:34:32.418681 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-01-02 00:34:32.418691 | orchestrator | Friday 02 January 2026 00:34:20 +0000 (0:00:08.652) 0:07:59.090 ********
2026-01-02 00:34:32.418701 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:32.418711 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:32.418720 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:32.418730 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:32.418753 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:32.418770 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:32.418786 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:32.418802 | orchestrator |
2026-01-02 00:34:32.418818 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-01-02 00:34:32.418834 | orchestrator | Friday 02 January 2026 00:34:21 +0000 (0:00:01.129) 0:08:00.220 ********
2026-01-02 00:34:32.418850 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:32.418874 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:32.418890 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:32.418906 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:32.418921 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:32.418936 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:32.418952 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:32.418968 | orchestrator |
2026-01-02 00:34:32.418985 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-01-02 00:34:32.419001 | orchestrator | Friday 02 January 2026 00:34:22 +0000 (0:00:01.361) 0:08:01.581 ********
2026-01-02 00:34:32.419017 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:32.419033 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:32.419049 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:32.419067 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:32.419084 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:32.419100 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:32.419117 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:32.419133 | orchestrator |
2026-01-02 00:34:32.419147 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service]
2026-01-02 00:34:32.419161 | orchestrator | Friday 02 January 2026 00:34:24 +0000 (0:00:02.015) 0:08:03.597 ********
2026-01-02 00:34:32.419176 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:32.419192 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:32.419208 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:32.419224 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:32.419241 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:32.419257 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:32.419273 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:32.419289 | orchestrator |
2026-01-02 00:34:32.419307 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-01-02 00:34:32.419324 | orchestrator | Friday 02 January 2026 00:34:26 +0000 (0:00:01.297) 0:08:04.894 ********
2026-01-02 00:34:32.419340 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:32.419355 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:32.419371 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:32.419386 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:32.419401 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:32.419416 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:32.419432 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:32.419449 | orchestrator |
2026-01-02 00:34:32.419465 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-01-02 00:34:32.419483 | orchestrator |
2026-01-02 00:34:32.419499 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-01-02 00:34:32.419517 | orchestrator | Friday 02 January 2026 00:34:27 +0000 (0:00:01.150) 0:08:06.045 ********
2026-01-02 00:34:32.419544 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:34:32.419590 | orchestrator |
2026-01-02 00:34:32.419600 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-02 00:34:32.419610 | orchestrator | Friday 02 January 2026 00:34:28 +0000 (0:00:00.835) 0:08:06.881 ********
2026-01-02 00:34:32.419620 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:32.419630 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:32.419639 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:32.419660 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:32.419669 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:32.419679 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:32.419688 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:32.419698 | orchestrator |
2026-01-02 00:34:32.419708 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-02 00:34:32.419717 | orchestrator | Friday 02 January 2026 00:34:29 +0000 (0:00:01.169) 0:08:08.050 ********
2026-01-02 00:34:32.419727 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:32.419737 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:32.419746 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:32.419756 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:32.419765 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:32.419774 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:32.419784 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:32.419793 | orchestrator |
2026-01-02 00:34:32.419803 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-01-02 00:34:32.419813 | orchestrator | Friday 02 January 2026 00:34:30 +0000 (0:00:01.219) 0:08:09.270 ********
2026-01-02 00:34:32.419823 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:34:32.419832 | orchestrator |
2026-01-02 00:34:32.419842 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-02 00:34:32.419852 | orchestrator | Friday 02 January 2026 00:34:31 +0000 (0:00:01.007) 0:08:10.277 ********
2026-01-02 00:34:32.419861 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:32.419871 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:32.419881 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:32.419890 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:32.419900 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:32.419909 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:32.419919 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:32.419928 | orchestrator |
2026-01-02 00:34:32.419951 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-02 00:34:34.109429 | orchestrator | Friday 02 January 2026 00:34:32 +0000 (0:00:00.879) 0:08:11.156 ********
2026-01-02 00:34:34.109631 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:34.109655 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:34.109668 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:34.109679 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:34.109690 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:34.109702 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:34.109713 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:34.109724 | orchestrator |
2026-01-02 00:34:34.109736 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:34:34.109749 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-01-02 00:34:34.109762 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-02 00:34:34.109773 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-02 00:34:34.109784 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-02 00:34:34.109794 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-01-02 00:34:34.109805 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-02 00:34:34.109816 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-02 00:34:34.109856 | orchestrator |
2026-01-02 00:34:34.109867 | orchestrator |
2026-01-02 00:34:34.109878 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:34:34.109889 | orchestrator | Friday 02 January 2026 00:34:33 +0000 (0:00:01.167) 0:08:12.324 ********
2026-01-02 00:34:34.109900 | orchestrator | ===============================================================================
2026-01-02 00:34:34.109911 | orchestrator | osism.commons.packages : Install required packages --------------------- 74.03s
2026-01-02 00:34:34.109922 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.37s
2026-01-02 00:34:34.109933 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.07s
2026-01-02 00:34:34.109943 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.67s
2026-01-02 00:34:34.109954 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.89s
2026-01-02 00:34:34.109966 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.56s
2026-01-02 00:34:34.109979 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.30s
2026-01-02 00:34:34.110008 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.50s
2026-01-02 00:34:34.110087 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.05s
2026-01-02 00:34:34.110100 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.01s
2026-01-02 00:34:34.110114 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.66s
2026-01-02 00:34:34.110128 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.65s
2026-01-02 00:34:34.110140 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.04s
2026-01-02 00:34:34.110152 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.97s
2026-01-02 00:34:34.110165 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.52s
2026-01-02 00:34:34.110177 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.47s
2026-01-02 00:34:34.110190 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.43s
2026-01-02 00:34:34.110202 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.05s
2026-01-02 00:34:34.110215 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.94s
2026-01-02 00:34:34.110228 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.75s
2026-01-02 00:34:34.462009 | orchestrator | + osism apply fail2ban
2026-01-02 00:34:47.535881 | orchestrator | 2026-01-02 00:34:47 | INFO  | Task ddfa6bac-8341-4667-8155-3a0b163d5f09 (fail2ban) was prepared for execution.
2026-01-02 00:34:47.535999 | orchestrator | 2026-01-02 00:34:47 | INFO  | It takes a moment until task ddfa6bac-8341-4667-8155-3a0b163d5f09 (fail2ban) has been started and output is visible here.
2026-01-02 00:35:09.693800 | orchestrator |
2026-01-02 00:35:09.693934 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-01-02 00:35:09.693966 | orchestrator |
2026-01-02 00:35:09.693990 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-01-02 00:35:09.694013 | orchestrator | Friday 02 January 2026 00:34:52 +0000 (0:00:00.317) 0:00:00.317 ********
2026-01-02 00:35:09.694123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:35:09.694150 | orchestrator |
2026-01-02 00:35:09.694173 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-01-02 00:35:09.694196 | orchestrator | Friday 02 January 2026 00:34:53 +0000 (0:00:01.231) 0:00:01.549 ********
2026-01-02 00:35:09.694249 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:35:09.694269 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:35:09.694286 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:35:09.694306 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:35:09.694325 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:35:09.694348 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:35:09.694376 | orchestrator | changed: [testbed-manager]
2026-01-02 00:35:09.694402 | orchestrator |
2026-01-02 00:35:09.694431 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-01-02 00:35:09.694458 | orchestrator | Friday 02 January 2026 00:35:04 +0000 (0:00:10.995) 0:00:12.544 ********
2026-01-02 00:35:09.694510 | orchestrator | changed: [testbed-manager]
2026-01-02 00:35:09.694540 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:35:09.694566 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:35:09.694592 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:35:09.694619 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:35:09.694642 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:35:09.694660 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:35:09.694679 | orchestrator |
2026-01-02 00:35:09.694704 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-01-02 00:35:09.694731 | orchestrator | Friday 02 January 2026 00:35:06 +0000 (0:00:01.517) 0:00:14.062 ********
2026-01-02 00:35:09.694758 | orchestrator | ok: [testbed-manager]
2026-01-02 00:35:09.694786 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:35:09.694813 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:35:09.694840 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:35:09.694866 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:35:09.694886 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:35:09.694904 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:35:09.694929 | orchestrator |
2026-01-02 00:35:09.694954 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-01-02 00:35:09.694976 | orchestrator | Friday 02 January 2026 00:35:07 +0000 (0:00:01.545) 0:00:15.608 ********
2026-01-02 00:35:09.694994 | orchestrator | changed: [testbed-manager]
2026-01-02 00:35:09.695011 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:35:09.695029 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:35:09.695046 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:35:09.695064 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:35:09.695081 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:35:09.695100 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:35:09.695118 | orchestrator |
2026-01-02 00:35:09.695136 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:35:09.695154 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:35:09.695173 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:35:09.695191 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:35:09.695229 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:35:09.695248 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:35:09.695266 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:35:09.695287 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:35:09.695309 | orchestrator |
2026-01-02 00:35:09.695330 | orchestrator |
2026-01-02 00:35:09.695351 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:35:09.695391 | orchestrator | Friday 02 January 2026 00:35:09 +0000 (0:00:01.671) 0:00:17.280 ********
2026-01-02 00:35:09.695412 | orchestrator | ===============================================================================
2026-01-02 00:35:09.695434 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.00s
2026-01-02 00:35:09.695455 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.67s
2026-01-02 00:35:09.695472 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.55s
2026-01-02 00:35:09.695517 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.52s
2026-01-02 00:35:09.695536 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.23s
2026-01-02 00:35:10.041721 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-01-02 00:35:10.041816 | orchestrator | + osism apply network
2026-01-02 00:35:22.305121 | orchestrator | 2026-01-02 00:35:22 | INFO  | Task 7ea1b6e3-53a5-4ec1-bee9-48a43ccbd5f7 (network) was prepared for execution.
2026-01-02 00:35:22.305219 | orchestrator | 2026-01-02 00:35:22 | INFO  | It takes a moment until task 7ea1b6e3-53a5-4ec1-bee9-48a43ccbd5f7 (network) has been started and output is visible here.
2026-01-02 00:35:51.706012 | orchestrator |
2026-01-02 00:35:51.706156 | orchestrator | PLAY [Apply role network] ******************************************************
2026-01-02 00:35:51.706170 | orchestrator |
2026-01-02 00:35:51.706178 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-01-02 00:35:51.706186 | orchestrator | Friday 02 January 2026 00:35:26 +0000 (0:00:00.265) 0:00:00.265 ********
2026-01-02 00:35:51.706194 | orchestrator | ok: [testbed-manager]
2026-01-02 00:35:51.706203 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:35:51.706210 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:35:51.706219 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:35:51.706226 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:35:51.706233 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:35:51.706241 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:35:51.706248 | orchestrator |
2026-01-02 00:35:51.706255 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-01-02 00:35:51.706263 | orchestrator | Friday 02 January 2026 00:35:27 +0000 (0:00:00.747) 0:00:01.013 ********
2026-01-02 00:35:51.706271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:35:51.706281 | orchestrator |
2026-01-02 00:35:51.706289 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-01-02 00:35:51.706296 | orchestrator | Friday 02 January 2026 00:35:28 +0000 (0:00:01.324) 0:00:02.337 ********
2026-01-02 00:35:51.706303 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:35:51.706311 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:35:51.706318 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:35:51.706325 | orchestrator | ok: [testbed-manager]
2026-01-02 00:35:51.706332 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:35:51.706340 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:35:51.706347 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:35:51.706354 | orchestrator |
2026-01-02 00:35:51.706362 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-01-02 00:35:51.706369 | orchestrator | Friday 02 January 2026 00:35:30 +0000 (0:00:02.068) 0:00:04.406 ********
2026-01-02 00:35:51.706376 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:35:51.706384 | orchestrator | ok: [testbed-manager]
2026-01-02 00:35:51.706391 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:35:51.706398 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:35:51.706406 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:35:51.706454 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:35:51.706464 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:35:51.706471 | orchestrator |
2026-01-02 00:35:51.706478 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-01-02 00:35:51.706507 | orchestrator | Friday 02 January 2026 00:35:32 +0000 (0:00:01.779) 0:00:06.185 ********
2026-01-02 00:35:51.706515 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-01-02 00:35:51.706522 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-01-02 00:35:51.706530 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-01-02 00:35:51.706537 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-01-02 00:35:51.706544 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-01-02 00:35:51.706551 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-01-02 00:35:51.706558 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-01-02 00:35:51.706567 | orchestrator | 2026-01-02 00:35:51.706576 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-01-02 00:35:51.706584 | orchestrator | Friday 02 January 2026 00:35:33 +0000 (0:00:00.982) 0:00:07.167 ******** 2026-01-02 00:35:51.706593 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-02 00:35:51.706603 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-02 00:35:51.706612 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-02 00:35:51.706632 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-02 00:35:51.706641 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-02 00:35:51.706650 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-02 00:35:51.706659 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-02 00:35:51.706667 | orchestrator | 2026-01-02 00:35:51.706676 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-01-02 00:35:51.706684 | orchestrator | Friday 02 January 2026 00:35:37 +0000 (0:00:03.488) 0:00:10.655 ******** 2026-01-02 00:35:51.706693 | orchestrator | changed: [testbed-manager] 2026-01-02 00:35:51.706701 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:35:51.706710 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:35:51.706719 | orchestrator | changed: 
[testbed-node-2] 2026-01-02 00:35:51.706727 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:35:51.706735 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:35:51.706744 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:35:51.706752 | orchestrator | 2026-01-02 00:35:51.706761 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-01-02 00:35:51.706770 | orchestrator | Friday 02 January 2026 00:35:38 +0000 (0:00:01.637) 0:00:12.293 ******** 2026-01-02 00:35:51.706778 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-02 00:35:51.706787 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-02 00:35:51.706795 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-02 00:35:51.706803 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-02 00:35:51.706812 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-02 00:35:51.706820 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-02 00:35:51.706829 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-02 00:35:51.706837 | orchestrator | 2026-01-02 00:35:51.706845 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-01-02 00:35:51.706855 | orchestrator | Friday 02 January 2026 00:35:40 +0000 (0:00:01.753) 0:00:14.046 ******** 2026-01-02 00:35:51.706863 | orchestrator | ok: [testbed-manager] 2026-01-02 00:35:51.706871 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:35:51.706880 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:35:51.706889 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:35:51.706897 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:35:51.706905 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:35:51.706913 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:35:51.706922 | orchestrator | 2026-01-02 00:35:51.706930 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-01-02 00:35:51.706954 | 
orchestrator | Friday 02 January 2026 00:35:41 +0000 (0:00:01.185) 0:00:15.232 ******** 2026-01-02 00:35:51.706962 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:35:51.706969 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:35:51.706976 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:35:51.706989 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:35:51.706997 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:35:51.707004 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:35:51.707011 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:35:51.707018 | orchestrator | 2026-01-02 00:35:51.707025 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-01-02 00:35:51.707033 | orchestrator | Friday 02 January 2026 00:35:42 +0000 (0:00:00.679) 0:00:15.912 ******** 2026-01-02 00:35:51.707040 | orchestrator | ok: [testbed-manager] 2026-01-02 00:35:51.707047 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:35:51.707054 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:35:51.707061 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:35:51.707068 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:35:51.707076 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:35:51.707083 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:35:51.707090 | orchestrator | 2026-01-02 00:35:51.707097 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-01-02 00:35:51.707104 | orchestrator | Friday 02 January 2026 00:35:44 +0000 (0:00:02.198) 0:00:18.110 ******** 2026-01-02 00:35:51.707112 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:35:51.707119 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:35:51.707126 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:35:51.707133 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:35:51.707140 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:35:51.707147 | 
orchestrator | skipping: [testbed-node-5] 2026-01-02 00:35:51.707155 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-01-02 00:35:51.707164 | orchestrator | 2026-01-02 00:35:51.707171 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-01-02 00:35:51.707178 | orchestrator | Friday 02 January 2026 00:35:45 +0000 (0:00:00.955) 0:00:19.065 ******** 2026-01-02 00:35:51.707186 | orchestrator | ok: [testbed-manager] 2026-01-02 00:35:51.707193 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:35:51.707200 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:35:51.707207 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:35:51.707214 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:35:51.707221 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:35:51.707228 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:35:51.707235 | orchestrator | 2026-01-02 00:35:51.707243 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-01-02 00:35:51.707250 | orchestrator | Friday 02 January 2026 00:35:47 +0000 (0:00:01.742) 0:00:20.807 ******** 2026-01-02 00:35:51.707257 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:35:51.707266 | orchestrator | 2026-01-02 00:35:51.707274 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-01-02 00:35:51.707281 | orchestrator | Friday 02 January 2026 00:35:48 +0000 (0:00:01.273) 0:00:22.081 ******** 2026-01-02 00:35:51.707288 | orchestrator | ok: [testbed-manager] 2026-01-02 00:35:51.707295 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:35:51.707302 | orchestrator 
| ok: [testbed-node-1] 2026-01-02 00:35:51.707309 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:35:51.707316 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:35:51.707324 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:35:51.707330 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:35:51.707338 | orchestrator | 2026-01-02 00:35:51.707345 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-01-02 00:35:51.707352 | orchestrator | Friday 02 January 2026 00:35:49 +0000 (0:00:01.006) 0:00:23.087 ******** 2026-01-02 00:35:51.707360 | orchestrator | ok: [testbed-manager] 2026-01-02 00:35:51.707367 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:35:51.707374 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:35:51.707386 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:35:51.707393 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:35:51.707400 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:35:51.707407 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:35:51.707431 | orchestrator | 2026-01-02 00:35:51.707439 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-02 00:35:51.707447 | orchestrator | Friday 02 January 2026 00:35:50 +0000 (0:00:00.876) 0:00:23.963 ******** 2026-01-02 00:35:51.707454 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-01-02 00:35:51.707461 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-01-02 00:35:51.707468 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-01-02 00:35:51.707475 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-01-02 00:35:51.707483 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-02 00:35:51.707490 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-01-02 00:35:51.707497 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-02 00:35:51.707504 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-01-02 00:35:51.707511 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-02 00:35:51.707518 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-02 00:35:51.707525 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-02 00:35:51.707533 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-02 00:35:51.707550 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-01-02 00:35:51.707558 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-02 00:35:51.707565 | orchestrator | 2026-01-02 00:35:51.707578 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-01-02 00:36:09.054850 | orchestrator | Friday 02 January 2026 00:35:51 +0000 (0:00:01.256) 0:00:25.220 ******** 2026-01-02 00:36:09.054987 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:36:09.055008 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:36:09.055021 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:36:09.055032 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:36:09.055043 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:36:09.055054 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:36:09.055066 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:36:09.055078 | orchestrator | 2026-01-02 00:36:09.055090 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-01-02 00:36:09.055102 | orchestrator | Friday 02 January 2026 00:35:52 +0000 (0:00:00.668) 0:00:25.889 ******** 2026-01-02 00:36:09.055114 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5 2026-01-02 00:36:09.055128 | orchestrator | 2026-01-02 00:36:09.055140 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-01-02 00:36:09.055151 | orchestrator | Friday 02 January 2026 00:35:56 +0000 (0:00:04.646) 0:00:30.535 ******** 2026-01-02 00:36:09.055164 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:09.055179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:09.055214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:09.055226 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:09.055237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-02 
00:36:09.055264 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:09.055276 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:09.055294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:09.055306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:09.055317 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:09.055328 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:09.055358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:09.055370 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:09.055384 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:09.055422 | orchestrator | 2026-01-02 00:36:09.055436 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-01-02 00:36:09.055449 | orchestrator | Friday 02 January 2026 00:36:03 +0000 (0:00:06.245) 0:00:36.781 ******** 2026-01-02 00:36:09.055462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:09.055484 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:09.055498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:09.055512 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:09.055525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:09.055538 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:09.055557 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:09.055571 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:09.055584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:09.055598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 
'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:09.055611 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:09.055625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:09.055654 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:23.406147 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:23.406269 | orchestrator | 2026-01-02 00:36:23.406288 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-01-02 00:36:23.406302 | orchestrator | Friday 02 January 2026 00:36:09 +0000 (0:00:05.793) 0:00:42.574 ******** 2026-01-02 00:36:23.406338 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:36:23.406351 | orchestrator | 2026-01-02 00:36:23.406363 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-01-02 00:36:23.406374 | orchestrator | Friday 02 January 2026 00:36:10 +0000 (0:00:01.183) 0:00:43.758 ******** 2026-01-02 00:36:23.406385 | orchestrator | ok: [testbed-manager] 2026-01-02 00:36:23.406460 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:36:23.406472 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:36:23.406483 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:36:23.406493 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:36:23.406504 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:36:23.406531 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:36:23.406543 | orchestrator | 2026-01-02 00:36:23.406554 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-02 00:36:23.406565 | orchestrator | Friday 02 January 2026 00:36:11 +0000 (0:00:01.213) 0:00:44.971 ******** 2026-01-02 00:36:23.406576 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-02 00:36:23.406588 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-02 00:36:23.406602 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-02 00:36:23.406615 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-02 00:36:23.406627 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-02 00:36:23.406640 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-02 00:36:23.406652 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-02 00:36:23.406664 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-02 00:36:23.406679 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:36:23.406692 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan1.network)  2026-01-02 00:36:23.406703 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-02 00:36:23.406714 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-02 00:36:23.406725 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-02 00:36:23.406752 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:36:23.406763 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-02 00:36:23.406774 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-02 00:36:23.406785 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-02 00:36:23.406796 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-02 00:36:23.406807 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:36:23.406818 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-02 00:36:23.406829 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-02 00:36:23.406839 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-02 00:36:23.406850 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-02 00:36:23.406864 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:36:23.406882 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-02 00:36:23.406899 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-02 00:36:23.406930 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-02 00:36:23.406948 | orchestrator | 
skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-02 00:36:23.406960 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:36:23.406971 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:36:23.406983 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-02 00:36:23.406994 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-02 00:36:23.407004 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-02 00:36:23.407015 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-02 00:36:23.407026 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:36:23.407037 | orchestrator | 2026-01-02 00:36:23.407048 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-01-02 00:36:23.407078 | orchestrator | Friday 02 January 2026 00:36:12 +0000 (0:00:01.005) 0:00:45.977 ******** 2026-01-02 00:36:23.407091 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:36:23.407102 | orchestrator | 2026-01-02 00:36:23.407114 | orchestrator | TASK [osism.commons.network : Install required packages for network-extra-init] *** 2026-01-02 00:36:23.407125 | orchestrator | Friday 02 January 2026 00:36:13 +0000 (0:00:01.337) 0:00:47.314 ******** 2026-01-02 00:36:23.407135 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:36:23.407146 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:36:23.407158 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:36:23.407168 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:36:23.407179 | orchestrator | skipping: [testbed-node-3] 2026-01-02 
00:36:23.407190 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:36:23.407200 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:36:23.407211 | orchestrator | 2026-01-02 00:36:23.407222 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-01-02 00:36:23.407233 | orchestrator | Friday 02 January 2026 00:36:14 +0000 (0:00:00.704) 0:00:48.019 ******** 2026-01-02 00:36:23.407244 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:36:23.407254 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:36:23.407265 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:36:23.407276 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:36:23.407286 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:36:23.407297 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:36:23.407308 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:36:23.407318 | orchestrator | 2026-01-02 00:36:23.407329 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-01-02 00:36:23.407340 | orchestrator | Friday 02 January 2026 00:36:15 +0000 (0:00:00.826) 0:00:48.846 ******** 2026-01-02 00:36:23.407351 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:36:23.407362 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:36:23.407372 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:36:23.407383 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:36:23.407425 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:36:23.407444 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:36:23.407464 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:36:23.407483 | orchestrator | 2026-01-02 00:36:23.407494 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-01-02 00:36:23.407505 | orchestrator | Friday 02 January 2026 00:36:15 +0000 (0:00:00.669) 0:00:49.515 ******** 2026-01-02 
00:36:23.407516 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:36:23.407527 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:36:23.407538 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:36:23.407548 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:36:23.407567 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:36:23.407578 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:36:23.407588 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:36:23.407599 | orchestrator | 2026-01-02 00:36:23.407610 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-01-02 00:36:23.407621 | orchestrator | Friday 02 January 2026 00:36:16 +0000 (0:00:00.827) 0:00:50.343 ******** 2026-01-02 00:36:23.407632 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:36:23.407643 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:36:23.407654 | orchestrator | ok: [testbed-manager] 2026-01-02 00:36:23.407665 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:36:23.407675 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:36:23.407693 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:36:23.407704 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:36:23.407714 | orchestrator | 2026-01-02 00:36:23.407725 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-01-02 00:36:23.407736 | orchestrator | Friday 02 January 2026 00:36:18 +0000 (0:00:01.579) 0:00:51.922 ******** 2026-01-02 00:36:23.407747 | orchestrator | ok: [testbed-manager] 2026-01-02 00:36:23.407758 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:36:23.407769 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:36:23.407779 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:36:23.407790 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:36:23.407800 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:36:23.407811 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:36:23.407822 | 
orchestrator | 2026-01-02 00:36:23.407833 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-01-02 00:36:23.407843 | orchestrator | Friday 02 January 2026 00:36:19 +0000 (0:00:01.258) 0:00:53.181 ******** 2026-01-02 00:36:23.407854 | orchestrator | ok: [testbed-manager] 2026-01-02 00:36:23.407865 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:36:23.407876 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:36:23.407886 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:36:23.407897 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:36:23.407907 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:36:23.407918 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:36:23.407929 | orchestrator | 2026-01-02 00:36:23.407940 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-01-02 00:36:23.407950 | orchestrator | Friday 02 January 2026 00:36:21 +0000 (0:00:02.335) 0:00:55.516 ******** 2026-01-02 00:36:23.407961 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:36:23.407972 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:36:23.407983 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:36:23.407994 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:36:23.408005 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:36:23.408015 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:36:23.408026 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:36:23.408036 | orchestrator | 2026-01-02 00:36:23.408047 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-01-02 00:36:23.408058 | orchestrator | Friday 02 January 2026 00:36:22 +0000 (0:00:00.646) 0:00:56.163 ******** 2026-01-02 00:36:23.408069 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:36:23.408080 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:36:23.408090 | orchestrator | skipping: [testbed-node-1] 
2026-01-02 00:36:23.408101 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:36:23.408112 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:36:23.408122 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:36:23.408133 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:36:23.408144 | orchestrator | 2026-01-02 00:36:23.408155 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:36:23.839848 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-02 00:36:23.839945 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-02 00:36:23.839984 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-02 00:36:23.839996 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-02 00:36:23.840008 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-02 00:36:23.840019 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-02 00:36:23.840029 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-02 00:36:23.840041 | orchestrator | 2026-01-02 00:36:23.840052 | orchestrator | 2026-01-02 00:36:23.840063 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:36:23.840075 | orchestrator | Friday 02 January 2026 00:36:23 +0000 (0:00:00.771) 0:00:56.935 ******** 2026-01-02 00:36:23.840086 | orchestrator | =============================================================================== 2026-01-02 00:36:23.840097 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.25s 2026-01-02 00:36:23.840107 | 
orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.79s 2026-01-02 00:36:23.840118 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.65s 2026-01-02 00:36:23.840129 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.49s 2026-01-02 00:36:23.840139 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.34s 2026-01-02 00:36:23.840150 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.20s 2026-01-02 00:36:23.840161 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.07s 2026-01-02 00:36:23.840171 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.78s 2026-01-02 00:36:23.840182 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.75s 2026-01-02 00:36:23.840193 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.74s 2026-01-02 00:36:23.840204 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.64s 2026-01-02 00:36:23.840214 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.58s 2026-01-02 00:36:23.840240 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.34s 2026-01-02 00:36:23.840252 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.32s 2026-01-02 00:36:23.840263 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.27s 2026-01-02 00:36:23.840273 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.26s 2026-01-02 00:36:23.840284 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.26s 2026-01-02 00:36:23.840295 | orchestrator | 
osism.commons.network : List existing configuration files --------------- 1.21s 2026-01-02 00:36:23.840305 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.19s 2026-01-02 00:36:23.840316 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.18s 2026-01-02 00:36:24.192458 | orchestrator | + osism apply wireguard 2026-01-02 00:36:36.405353 | orchestrator | 2026-01-02 00:36:36 | INFO  | Task 58bb12e6-aae4-4c99-afe5-b7983759e831 (wireguard) was prepared for execution. 2026-01-02 00:36:36.405523 | orchestrator | 2026-01-02 00:36:36 | INFO  | It takes a moment until task 58bb12e6-aae4-4c99-afe5-b7983759e831 (wireguard) has been started and output is visible here. 2026-01-02 00:36:57.987447 | orchestrator | 2026-01-02 00:36:57.987607 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-01-02 00:36:57.987628 | orchestrator | 2026-01-02 00:36:57.987641 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-01-02 00:36:57.987652 | orchestrator | Friday 02 January 2026 00:36:40 +0000 (0:00:00.246) 0:00:00.246 ******** 2026-01-02 00:36:57.987663 | orchestrator | ok: [testbed-manager] 2026-01-02 00:36:57.987676 | orchestrator | 2026-01-02 00:36:57.987686 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-01-02 00:36:57.987697 | orchestrator | Friday 02 January 2026 00:36:42 +0000 (0:00:01.688) 0:00:01.935 ******** 2026-01-02 00:36:57.987708 | orchestrator | changed: [testbed-manager] 2026-01-02 00:36:57.987720 | orchestrator | 2026-01-02 00:36:57.987731 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-01-02 00:36:57.987741 | orchestrator | Friday 02 January 2026 00:36:49 +0000 (0:00:07.189) 0:00:09.124 ******** 2026-01-02 00:36:57.987752 | orchestrator | changed: [testbed-manager] 2026-01-02 
00:36:57.987763 | orchestrator | 2026-01-02 00:36:57.987774 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-01-02 00:36:57.987784 | orchestrator | Friday 02 January 2026 00:36:50 +0000 (0:00:00.586) 0:00:09.711 ******** 2026-01-02 00:36:57.987795 | orchestrator | changed: [testbed-manager] 2026-01-02 00:36:57.987806 | orchestrator | 2026-01-02 00:36:57.987816 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-01-02 00:36:57.987827 | orchestrator | Friday 02 January 2026 00:36:50 +0000 (0:00:00.501) 0:00:10.212 ******** 2026-01-02 00:36:57.987838 | orchestrator | ok: [testbed-manager] 2026-01-02 00:36:57.987849 | orchestrator | 2026-01-02 00:36:57.987860 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-01-02 00:36:57.987871 | orchestrator | Friday 02 January 2026 00:36:51 +0000 (0:00:00.691) 0:00:10.904 ******** 2026-01-02 00:36:57.987881 | orchestrator | ok: [testbed-manager] 2026-01-02 00:36:57.987892 | orchestrator | 2026-01-02 00:36:57.987903 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-01-02 00:36:57.987913 | orchestrator | Friday 02 January 2026 00:36:52 +0000 (0:00:00.544) 0:00:11.449 ******** 2026-01-02 00:36:57.987924 | orchestrator | ok: [testbed-manager] 2026-01-02 00:36:57.987936 | orchestrator | 2026-01-02 00:36:57.987947 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-01-02 00:36:57.987960 | orchestrator | Friday 02 January 2026 00:36:52 +0000 (0:00:00.443) 0:00:11.893 ******** 2026-01-02 00:36:57.987974 | orchestrator | changed: [testbed-manager] 2026-01-02 00:36:57.987987 | orchestrator | 2026-01-02 00:36:57.987999 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-01-02 00:36:57.988012 | orchestrator | Friday 02 January 2026 
00:36:53 +0000 (0:00:01.273) 0:00:13.166 ******** 2026-01-02 00:36:57.988025 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-02 00:36:57.988038 | orchestrator | changed: [testbed-manager] 2026-01-02 00:36:57.988051 | orchestrator | 2026-01-02 00:36:57.988063 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-01-02 00:36:57.988076 | orchestrator | Friday 02 January 2026 00:36:54 +0000 (0:00:01.003) 0:00:14.169 ******** 2026-01-02 00:36:57.988088 | orchestrator | changed: [testbed-manager] 2026-01-02 00:36:57.988100 | orchestrator | 2026-01-02 00:36:57.988113 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-01-02 00:36:57.988127 | orchestrator | Friday 02 January 2026 00:36:56 +0000 (0:00:01.788) 0:00:15.957 ******** 2026-01-02 00:36:57.988139 | orchestrator | changed: [testbed-manager] 2026-01-02 00:36:57.988151 | orchestrator | 2026-01-02 00:36:57.988164 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:36:57.988176 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:36:57.988190 | orchestrator | 2026-01-02 00:36:57.988203 | orchestrator | 2026-01-02 00:36:57.988216 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:36:57.988236 | orchestrator | Friday 02 January 2026 00:36:57 +0000 (0:00:01.022) 0:00:16.980 ******** 2026-01-02 00:36:57.988248 | orchestrator | =============================================================================== 2026-01-02 00:36:57.988261 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.19s 2026-01-02 00:36:57.988274 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.79s 2026-01-02 00:36:57.988286 | orchestrator | osism.services.wireguard : 
Install iptables package --------------------- 1.69s 2026-01-02 00:36:57.988298 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.27s 2026-01-02 00:36:57.988311 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.02s 2026-01-02 00:36:57.988323 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.00s 2026-01-02 00:36:57.988336 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.69s 2026-01-02 00:36:57.988346 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.59s 2026-01-02 00:36:57.988378 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.55s 2026-01-02 00:36:57.988396 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.50s 2026-01-02 00:36:57.988414 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s 2026-01-02 00:36:58.328225 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-01-02 00:36:58.363144 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-01-02 00:36:58.363233 | orchestrator | Dload Upload Total Spent Left Speed 2026-01-02 00:36:58.439717 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 183 0 --:--:-- --:--:-- --:--:-- 184 2026-01-02 00:36:58.453482 | orchestrator | + osism apply --environment custom workarounds 2026-01-02 00:37:00.504135 | orchestrator | 2026-01-02 00:37:00 | INFO  | Trying to run play workarounds in environment custom 2026-01-02 00:37:10.726912 | orchestrator | 2026-01-02 00:37:10 | INFO  | Task 577bb4b1-7bad-40a2-a06a-ca1bb9dc2cdb (workarounds) was prepared for execution. 
2026-01-02 00:37:10.727122 | orchestrator | 2026-01-02 00:37:10 | INFO  | It takes a moment until task 577bb4b1-7bad-40a2-a06a-ca1bb9dc2cdb (workarounds) has been started and output is visible here. 2026-01-02 00:37:36.798066 | orchestrator | 2026-01-02 00:37:36.798176 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 00:37:36.798190 | orchestrator | 2026-01-02 00:37:36.798199 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-01-02 00:37:36.798208 | orchestrator | Friday 02 January 2026 00:37:15 +0000 (0:00:00.130) 0:00:00.130 ******** 2026-01-02 00:37:36.798217 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-01-02 00:37:36.798225 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-01-02 00:37:36.798234 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-01-02 00:37:36.798242 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-01-02 00:37:36.798250 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-01-02 00:37:36.798258 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-01-02 00:37:36.798265 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-01-02 00:37:36.798273 | orchestrator | 2026-01-02 00:37:36.798281 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-01-02 00:37:36.798289 | orchestrator | 2026-01-02 00:37:36.798297 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-01-02 00:37:36.798305 | orchestrator | Friday 02 January 2026 00:37:15 +0000 (0:00:00.724) 0:00:00.855 ******** 2026-01-02 00:37:36.798313 | orchestrator | ok: [testbed-manager] 2026-01-02 00:37:36.798393 | orchestrator | 2026-01-02 00:37:36.798404 | 
orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-01-02 00:37:36.798412 | orchestrator | 2026-01-02 00:37:36.798419 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-01-02 00:37:36.798427 | orchestrator | Friday 02 January 2026 00:37:18 +0000 (0:00:02.256) 0:00:03.111 ******** 2026-01-02 00:37:36.798435 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:37:36.798443 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:37:36.798451 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:37:36.798459 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:37:36.798466 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:37:36.798474 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:37:36.798482 | orchestrator | 2026-01-02 00:37:36.798489 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-01-02 00:37:36.798497 | orchestrator | 2026-01-02 00:37:36.798505 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-01-02 00:37:36.798513 | orchestrator | Friday 02 January 2026 00:37:19 +0000 (0:00:01.862) 0:00:04.973 ******** 2026-01-02 00:37:36.798522 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-02 00:37:36.798532 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-02 00:37:36.798540 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-02 00:37:36.798548 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-02 00:37:36.798555 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-02 00:37:36.798563 | orchestrator 
| changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-02 00:37:36.798572 | orchestrator | 2026-01-02 00:37:36.798580 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2026-01-02 00:37:36.798590 | orchestrator | Friday 02 January 2026 00:37:21 +0000 (0:00:01.511) 0:00:06.485 ******** 2026-01-02 00:37:36.798599 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:37:36.798609 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:37:36.798634 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:37:36.798643 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:37:36.798653 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:37:36.798662 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:37:36.798671 | orchestrator | 2026-01-02 00:37:36.798682 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-01-02 00:37:36.798691 | orchestrator | Friday 02 January 2026 00:37:25 +0000 (0:00:03.856) 0:00:10.341 ******** 2026-01-02 00:37:36.798701 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:37:36.798709 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:37:36.798718 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:37:36.798728 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:37:36.798736 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:37:36.798745 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:37:36.798754 | orchestrator | 2026-01-02 00:37:36.798764 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-01-02 00:37:36.798774 | orchestrator | 2026-01-02 00:37:36.798783 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-01-02 00:37:36.798793 | orchestrator | Friday 02 January 2026 00:37:25 +0000 (0:00:00.759) 0:00:11.101 ******** 2026-01-02 
00:37:36.798802 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:37:36.798811 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:37:36.798820 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:37:36.798829 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:37:36.798839 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:37:36.798848 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:37:36.798863 | orchestrator | changed: [testbed-manager] 2026-01-02 00:37:36.798871 | orchestrator | 2026-01-02 00:37:36.798881 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-01-02 00:37:36.798890 | orchestrator | Friday 02 January 2026 00:37:27 +0000 (0:00:01.624) 0:00:12.725 ******** 2026-01-02 00:37:36.798900 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:37:36.798908 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:37:36.798918 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:37:36.798927 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:37:36.798937 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:37:36.798946 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:37:36.798969 | orchestrator | changed: [testbed-manager] 2026-01-02 00:37:36.798978 | orchestrator | 2026-01-02 00:37:36.798986 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-01-02 00:37:36.798994 | orchestrator | Friday 02 January 2026 00:37:29 +0000 (0:00:01.628) 0:00:14.353 ******** 2026-01-02 00:37:36.799002 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:37:36.799010 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:37:36.799018 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:37:36.799026 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:37:36.799034 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:37:36.799041 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:37:36.799049 | orchestrator | ok: [testbed-manager] 
2026-01-02 00:37:36.799057 | orchestrator | 2026-01-02 00:37:36.799065 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-01-02 00:37:36.799073 | orchestrator | Friday 02 January 2026 00:37:31 +0000 (0:00:01.775) 0:00:16.128 ******** 2026-01-02 00:37:36.799081 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:37:36.799089 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:37:36.799097 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:37:36.799105 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:37:36.799113 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:37:36.799121 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:37:36.799128 | orchestrator | changed: [testbed-manager] 2026-01-02 00:37:36.799136 | orchestrator | 2026-01-02 00:37:36.799144 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-01-02 00:37:36.799152 | orchestrator | Friday 02 January 2026 00:37:33 +0000 (0:00:02.045) 0:00:18.174 ******** 2026-01-02 00:37:36.799160 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:37:36.799168 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:37:36.799176 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:37:36.799183 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:37:36.799191 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:37:36.799199 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:37:36.799207 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:37:36.799214 | orchestrator | 2026-01-02 00:37:36.799223 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-01-02 00:37:36.799231 | orchestrator | 2026-01-02 00:37:36.799238 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-01-02 00:37:36.799246 | orchestrator | Friday 02 January 2026 00:37:33 +0000 (0:00:00.713) 
0:00:18.888 ******** 2026-01-02 00:37:36.799254 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:37:36.799262 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:37:36.799270 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:37:36.799277 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:37:36.799285 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:37:36.799293 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:37:36.799301 | orchestrator | ok: [testbed-manager] 2026-01-02 00:37:36.799309 | orchestrator | 2026-01-02 00:37:36.799316 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:37:36.799343 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:37:36.799353 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:37:36.799367 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:37:36.799375 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:37:36.799384 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:37:36.799396 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:37:36.799404 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:37:36.799412 | orchestrator | 2026-01-02 00:37:36.799420 | orchestrator | 2026-01-02 00:37:36.799428 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:37:36.799436 | orchestrator | Friday 02 January 2026 00:37:36 +0000 (0:00:02.981) 0:00:21.869 ******** 2026-01-02 00:37:36.799444 | orchestrator | 
=============================================================================== 2026-01-02 00:37:36.799452 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.86s 2026-01-02 00:37:36.799460 | orchestrator | Install python3-docker -------------------------------------------------- 2.98s 2026-01-02 00:37:36.799468 | orchestrator | Apply netplan configuration --------------------------------------------- 2.26s 2026-01-02 00:37:36.799475 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.05s 2026-01-02 00:37:36.799483 | orchestrator | Apply netplan configuration --------------------------------------------- 1.86s 2026-01-02 00:37:36.799491 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.78s 2026-01-02 00:37:36.799499 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.63s 2026-01-02 00:37:36.799507 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.62s 2026-01-02 00:37:36.799515 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.51s 2026-01-02 00:37:36.799523 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.76s 2026-01-02 00:37:36.799531 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.72s 2026-01-02 00:37:36.799544 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.71s 2026-01-02 00:37:37.552525 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-01-02 00:37:49.602608 | orchestrator | 2026-01-02 00:37:49 | INFO  | Task 08694f19-5d13-4e14-8eca-3deba096f9a9 (reboot) was prepared for execution. 
2026-01-02 00:37:49.602737 | orchestrator | 2026-01-02 00:37:49 | INFO  | It takes a moment until task 08694f19-5d13-4e14-8eca-3deba096f9a9 (reboot) has been started and output is visible here. 2026-01-02 00:37:59.945826 | orchestrator | 2026-01-02 00:37:59.945948 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-02 00:37:59.945969 | orchestrator | 2026-01-02 00:37:59.945990 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-02 00:37:59.946009 | orchestrator | Friday 02 January 2026 00:37:53 +0000 (0:00:00.220) 0:00:00.220 ******** 2026-01-02 00:37:59.946088 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:37:59.946101 | orchestrator | 2026-01-02 00:37:59.946112 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-02 00:37:59.946124 | orchestrator | Friday 02 January 2026 00:37:54 +0000 (0:00:00.113) 0:00:00.333 ******** 2026-01-02 00:37:59.946135 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:37:59.946146 | orchestrator | 2026-01-02 00:37:59.946157 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-02 00:37:59.946203 | orchestrator | Friday 02 January 2026 00:37:54 +0000 (0:00:00.969) 0:00:01.302 ******** 2026-01-02 00:37:59.946214 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:37:59.946225 | orchestrator | 2026-01-02 00:37:59.946236 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-02 00:37:59.946247 | orchestrator | 2026-01-02 00:37:59.946258 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-02 00:37:59.946268 | orchestrator | Friday 02 January 2026 00:37:55 +0000 (0:00:00.108) 0:00:01.411 ******** 2026-01-02 00:37:59.946279 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:37:59.946289 | 
orchestrator | 2026-01-02 00:37:59.946329 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-02 00:37:59.946344 | orchestrator | Friday 02 January 2026 00:37:55 +0000 (0:00:00.091) 0:00:01.502 ******** 2026-01-02 00:37:59.946355 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:37:59.946367 | orchestrator | 2026-01-02 00:37:59.946380 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-02 00:37:59.946394 | orchestrator | Friday 02 January 2026 00:37:55 +0000 (0:00:00.630) 0:00:02.132 ******** 2026-01-02 00:37:59.946406 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:37:59.946418 | orchestrator | 2026-01-02 00:37:59.946432 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-02 00:37:59.946445 | orchestrator | 2026-01-02 00:37:59.946457 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-02 00:37:59.946470 | orchestrator | Friday 02 January 2026 00:37:55 +0000 (0:00:00.108) 0:00:02.241 ******** 2026-01-02 00:37:59.946482 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:37:59.946495 | orchestrator | 2026-01-02 00:37:59.946508 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-02 00:37:59.946519 | orchestrator | Friday 02 January 2026 00:37:56 +0000 (0:00:00.183) 0:00:02.425 ******** 2026-01-02 00:37:59.946533 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:37:59.946545 | orchestrator | 2026-01-02 00:37:59.946557 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-02 00:37:59.946570 | orchestrator | Friday 02 January 2026 00:37:56 +0000 (0:00:00.661) 0:00:03.086 ******** 2026-01-02 00:37:59.946583 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:37:59.946596 | orchestrator | 2026-01-02 00:37:59.946609 | 
orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-02 00:37:59.946622 | orchestrator | 2026-01-02 00:37:59.946635 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-02 00:37:59.946661 | orchestrator | Friday 02 January 2026 00:37:56 +0000 (0:00:00.109) 0:00:03.195 ******** 2026-01-02 00:37:59.946672 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:37:59.946683 | orchestrator | 2026-01-02 00:37:59.946694 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-02 00:37:59.946704 | orchestrator | Friday 02 January 2026 00:37:56 +0000 (0:00:00.100) 0:00:03.295 ******** 2026-01-02 00:37:59.946715 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:37:59.946726 | orchestrator | 2026-01-02 00:37:59.946736 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-02 00:37:59.946747 | orchestrator | Friday 02 January 2026 00:37:57 +0000 (0:00:00.663) 0:00:03.959 ******** 2026-01-02 00:37:59.946757 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:37:59.946768 | orchestrator | 2026-01-02 00:37:59.946779 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-02 00:37:59.946790 | orchestrator | 2026-01-02 00:37:59.946801 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-02 00:37:59.946811 | orchestrator | Friday 02 January 2026 00:37:57 +0000 (0:00:00.147) 0:00:04.107 ******** 2026-01-02 00:37:59.946822 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:37:59.946833 | orchestrator | 2026-01-02 00:37:59.946843 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-02 00:37:59.946862 | orchestrator | Friday 02 January 2026 00:37:57 +0000 (0:00:00.100) 0:00:04.208 ******** 2026-01-02 
00:37:59.946873 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:37:59.946883 | orchestrator | 2026-01-02 00:37:59.946894 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-02 00:37:59.946905 | orchestrator | Friday 02 January 2026 00:37:58 +0000 (0:00:00.664) 0:00:04.872 ******** 2026-01-02 00:37:59.946915 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:37:59.946926 | orchestrator | 2026-01-02 00:37:59.946937 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-02 00:37:59.946947 | orchestrator | 2026-01-02 00:37:59.946958 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-02 00:37:59.946969 | orchestrator | Friday 02 January 2026 00:37:58 +0000 (0:00:00.138) 0:00:05.011 ******** 2026-01-02 00:37:59.946979 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:37:59.946990 | orchestrator | 2026-01-02 00:37:59.947001 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-02 00:37:59.947011 | orchestrator | Friday 02 January 2026 00:37:58 +0000 (0:00:00.113) 0:00:05.124 ******** 2026-01-02 00:37:59.947022 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:37:59.947033 | orchestrator | 2026-01-02 00:37:59.947043 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-02 00:37:59.947054 | orchestrator | Friday 02 January 2026 00:37:59 +0000 (0:00:00.711) 0:00:05.836 ******** 2026-01-02 00:37:59.947083 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:37:59.947095 | orchestrator | 2026-01-02 00:37:59.947106 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:37:59.947118 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:37:59.947129 | orchestrator | 
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:37:59.947140 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:37:59.947151 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:37:59.947162 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:37:59.947172 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:37:59.947200 | orchestrator | 2026-01-02 00:37:59.947211 | orchestrator | 2026-01-02 00:37:59.947233 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:37:59.947244 | orchestrator | Friday 02 January 2026 00:37:59 +0000 (0:00:00.042) 0:00:05.879 ******** 2026-01-02 00:37:59.947255 | orchestrator | =============================================================================== 2026-01-02 00:37:59.947266 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.30s 2026-01-02 00:37:59.947276 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.70s 2026-01-02 00:37:59.947287 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.66s 2026-01-02 00:38:00.298724 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-01-02 00:38:12.452938 | orchestrator | 2026-01-02 00:38:12 | INFO  | Task 9c4606a6-9165-448d-acaa-042fb9347f13 (wait-for-connection) was prepared for execution. 2026-01-02 00:38:12.453057 | orchestrator | 2026-01-02 00:38:12 | INFO  | It takes a moment until task 9c4606a6-9165-448d-acaa-042fb9347f13 (wait-for-connection) has been started and output is visible here. 
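The `wait-for-connection` play that starts here uses Ansible's `wait_for_connection` module under the hood. A hedged shell equivalent of what it does per node (function name, node argument, and timeout default are illustrative assumptions, not taken from the playbook):

```shell
# Poll a node over SSH until it answers or a deadline passes.
# This is a sketch of the behaviour, not the actual module implementation.
wait_for_ssh() {
    local host=$1
    local deadline=$((SECONDS + ${2:-600}))   # assumed default timeout: 600s
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        (( SECONDS >= deadline )) && return 1  # give up once the deadline passes
        sleep 5                                # back off between attempts
    done
}
```

In the recap below, all six nodes report `ok=1 changed=0`, i.e. every node answered within the window.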
2026-01-02 00:38:28.737227 | orchestrator | 2026-01-02 00:38:28.737387 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-01-02 00:38:28.737408 | orchestrator | 2026-01-02 00:38:28.737421 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-01-02 00:38:28.737432 | orchestrator | Friday 02 January 2026 00:38:16 +0000 (0:00:00.234) 0:00:00.234 ******** 2026-01-02 00:38:28.737443 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:38:28.737476 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:38:28.737489 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:38:28.737500 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:38:28.737510 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:38:28.737521 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:38:28.737532 | orchestrator | 2026-01-02 00:38:28.737543 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:38:28.737554 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:38:28.737567 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:38:28.737578 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:38:28.737589 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:38:28.737600 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:38:28.737610 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:38:28.737621 | orchestrator | 2026-01-02 00:38:28.737632 | orchestrator | 2026-01-02 00:38:28.737643 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-02 00:38:28.737653 | orchestrator | Friday 02 January 2026 00:38:28 +0000 (0:00:11.536) 0:00:11.770 ******** 2026-01-02 00:38:28.737664 | orchestrator | =============================================================================== 2026-01-02 00:38:28.737675 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.54s 2026-01-02 00:38:29.049956 | orchestrator | + osism apply hddtemp 2026-01-02 00:38:41.233010 | orchestrator | 2026-01-02 00:38:41 | INFO  | Task c39eae53-e32a-4ca1-b980-223bf9489067 (hddtemp) was prepared for execution. 2026-01-02 00:38:41.234099 | orchestrator | 2026-01-02 00:38:41 | INFO  | It takes a moment until task c39eae53-e32a-4ca1-b980-223bf9489067 (hddtemp) has been started and output is visible here. 2026-01-02 00:39:10.683597 | orchestrator | 2026-01-02 00:39:10.683742 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-01-02 00:39:10.683760 | orchestrator | 2026-01-02 00:39:10.683773 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-01-02 00:39:10.683786 | orchestrator | Friday 02 January 2026 00:38:45 +0000 (0:00:00.293) 0:00:00.293 ******** 2026-01-02 00:39:10.683798 | orchestrator | ok: [testbed-manager] 2026-01-02 00:39:10.683812 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:39:10.683823 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:39:10.683835 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:39:10.683845 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:39:10.683858 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:39:10.683869 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:39:10.683880 | orchestrator | 2026-01-02 00:39:10.683892 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-01-02 00:39:10.683903 | orchestrator | Friday 02 January 2026 
00:38:46 +0000 (0:00:00.748) 0:00:01.041 ******** 2026-01-02 00:39:10.683916 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:39:10.683957 | orchestrator | 2026-01-02 00:39:10.683969 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-01-02 00:39:10.683980 | orchestrator | Friday 02 January 2026 00:38:47 +0000 (0:00:01.258) 0:00:02.299 ******** 2026-01-02 00:39:10.683991 | orchestrator | ok: [testbed-manager] 2026-01-02 00:39:10.684002 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:39:10.684013 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:39:10.684023 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:39:10.684034 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:39:10.684045 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:39:10.684056 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:39:10.684066 | orchestrator | 2026-01-02 00:39:10.684077 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-01-02 00:39:10.684088 | orchestrator | Friday 02 January 2026 00:38:49 +0000 (0:00:02.156) 0:00:04.456 ******** 2026-01-02 00:39:10.684101 | orchestrator | changed: [testbed-manager] 2026-01-02 00:39:10.684116 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:39:10.684130 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:39:10.684142 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:39:10.684154 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:39:10.684166 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:39:10.684179 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:39:10.684192 | orchestrator | 2026-01-02 00:39:10.684205 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-01-02 00:39:10.684217 | orchestrator | Friday 02 January 2026 00:38:51 +0000 (0:00:01.254) 0:00:05.711 ******** 2026-01-02 00:39:10.684231 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:39:10.684243 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:39:10.684291 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:39:10.684304 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:39:10.684317 | orchestrator | ok: [testbed-manager] 2026-01-02 00:39:10.684330 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:39:10.684343 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:39:10.684355 | orchestrator | 2026-01-02 00:39:10.684367 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-01-02 00:39:10.684381 | orchestrator | Friday 02 January 2026 00:38:52 +0000 (0:00:01.221) 0:00:06.933 ******** 2026-01-02 00:39:10.684394 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:39:10.684406 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:39:10.684436 | orchestrator | changed: [testbed-manager] 2026-01-02 00:39:10.684450 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:39:10.684461 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:39:10.684472 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:39:10.684482 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:39:10.684493 | orchestrator | 2026-01-02 00:39:10.684504 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-01-02 00:39:10.684515 | orchestrator | Friday 02 January 2026 00:38:53 +0000 (0:00:00.919) 0:00:07.853 ******** 2026-01-02 00:39:10.684525 | orchestrator | changed: [testbed-manager] 2026-01-02 00:39:10.684536 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:39:10.684547 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:39:10.684558 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:39:10.684568 | orchestrator | changed: 
[testbed-node-3] 2026-01-02 00:39:10.684579 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:39:10.684590 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:39:10.684600 | orchestrator | 2026-01-02 00:39:10.684611 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-01-02 00:39:10.684622 | orchestrator | Friday 02 January 2026 00:39:07 +0000 (0:00:13.988) 0:00:21.842 ******** 2026-01-02 00:39:10.684633 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:39:10.684653 | orchestrator | 2026-01-02 00:39:10.684664 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-01-02 00:39:10.684675 | orchestrator | Friday 02 January 2026 00:39:08 +0000 (0:00:01.129) 0:00:22.971 ******** 2026-01-02 00:39:10.684686 | orchestrator | changed: [testbed-manager] 2026-01-02 00:39:10.684697 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:39:10.684708 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:39:10.684719 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:39:10.684729 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:39:10.684740 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:39:10.684750 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:39:10.684761 | orchestrator | 2026-01-02 00:39:10.684772 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:39:10.684783 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:39:10.684816 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:39:10.684828 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:39:10.684839 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:39:10.684850 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:39:10.684861 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:39:10.684872 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:39:10.684883 | orchestrator | 2026-01-02 00:39:10.684894 | orchestrator | 2026-01-02 00:39:10.684905 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:39:10.684916 | orchestrator | Friday 02 January 2026 00:39:10 +0000 (0:00:01.880) 0:00:24.851 ******** 2026-01-02 00:39:10.684927 | orchestrator | =============================================================================== 2026-01-02 00:39:10.684938 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.99s 2026-01-02 00:39:10.684949 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.16s 2026-01-02 00:39:10.684960 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.88s 2026-01-02 00:39:10.684971 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.26s 2026-01-02 00:39:10.684982 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.25s 2026-01-02 00:39:10.684993 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.22s 2026-01-02 00:39:10.685004 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.13s 2026-01-02 00:39:10.685015 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.92s 2026-01-02 00:39:10.685026 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.75s 2026-01-02 00:39:11.036129 | orchestrator | ++ semver latest 7.1.1 2026-01-02 00:39:11.087601 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-02 00:39:11.087711 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-02 00:39:11.087727 | orchestrator | + sudo systemctl restart manager.service 2026-01-02 00:39:30.344536 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-02 00:39:30.344640 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-02 00:39:30.344658 | orchestrator | + local max_attempts=60 2026-01-02 00:39:30.344672 | orchestrator | + local name=ceph-ansible 2026-01-02 00:39:30.344684 | orchestrator | + local attempt_num=1 2026-01-02 00:39:30.344720 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-02 00:39:30.379078 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-02 00:39:30.379144 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-02 00:39:30.379151 | orchestrator | + sleep 5 2026-01-02 00:39:35.382943 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-02 00:39:35.413665 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-02 00:39:35.413785 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-02 00:39:35.413811 | orchestrator | + sleep 5 2026-01-02 00:39:40.416813 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-02 00:39:40.454743 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-02 00:39:40.454877 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-02 00:39:40.454900 | orchestrator | + sleep 5 2026-01-02 00:39:45.459650 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-02 00:39:45.499435 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-02 00:39:45.499646 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-02 00:39:45.499668 | orchestrator | + sleep 5 2026-01-02 00:39:50.504954 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-02 00:39:50.536634 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-02 00:39:50.536722 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-02 00:39:50.536737 | orchestrator | + sleep 5 2026-01-02 00:39:55.540076 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-02 00:39:55.574645 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-02 00:39:55.574959 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-02 00:39:55.574987 | orchestrator | + sleep 5 2026-01-02 00:40:00.580479 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-02 00:40:00.624053 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-02 00:40:00.624164 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-02 00:40:00.624180 | orchestrator | + sleep 5 2026-01-02 00:40:05.629414 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-02 00:40:05.658189 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-02 00:40:05.658307 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-02 00:40:05.658322 | orchestrator | + sleep 5 2026-01-02 00:40:10.664821 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-02 00:40:10.700888 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-02 00:40:10.700994 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-02 00:40:10.701011 | orchestrator | + sleep 5 2026-01-02 00:40:15.706593 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-02 00:40:15.752349 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2026-01-02 00:40:15.752425 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-02 00:40:15.752434 | orchestrator | + sleep 5 2026-01-02 00:40:20.757915 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-02 00:40:20.796818 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-02 00:40:20.796904 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-02 00:40:20.796920 | orchestrator | + sleep 5 2026-01-02 00:40:25.800375 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-02 00:40:25.838912 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-02 00:40:25.839016 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-02 00:40:25.839033 | orchestrator | + sleep 5 2026-01-02 00:40:30.844430 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-02 00:40:30.885311 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-02 00:40:30.885401 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-02 00:40:30.885415 | orchestrator | + sleep 5 2026-01-02 00:40:35.889919 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-02 00:40:35.931926 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-02 00:40:35.932029 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-02 00:40:35.932054 | orchestrator | + local max_attempts=60 2026-01-02 00:40:35.932072 | orchestrator | + local name=kolla-ansible 2026-01-02 00:40:35.932090 | orchestrator | + local attempt_num=1 2026-01-02 00:40:35.932938 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-02 00:40:35.969478 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-02 00:40:35.969568 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-02 00:40:35.969576 | orchestrator | + local max_attempts=60 2026-01-02 
00:40:35.969581 | orchestrator | + local name=osism-ansible 2026-01-02 00:40:35.969585 | orchestrator | + local attempt_num=1 2026-01-02 00:40:35.970460 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-02 00:40:36.002937 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-02 00:40:36.003008 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-02 00:40:36.003014 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-02 00:40:36.186550 | orchestrator | ARA in ceph-ansible already disabled. 2026-01-02 00:40:36.349409 | orchestrator | ARA in kolla-ansible already disabled. 2026-01-02 00:40:36.524686 | orchestrator | ARA in osism-ansible already disabled. 2026-01-02 00:40:36.692548 | orchestrator | ARA in osism-kubernetes already disabled. 2026-01-02 00:40:36.693337 | orchestrator | + osism apply gather-facts 2026-01-02 00:40:49.015284 | orchestrator | 2026-01-02 00:40:49 | INFO  | Task dd8bb860-1b78-4392-b257-e32930bd0843 (gather-facts) was prepared for execution. 2026-01-02 00:40:49.015438 | orchestrator | 2026-01-02 00:40:49 | INFO  | It takes a moment until task dd8bb860-1b78-4392-b257-e32930bd0843 (gather-facts) has been started and output is visible here. 
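The `wait_for_container_healthy` calls traced above (for `ceph-ansible`, `kolla-ansible`, and `osism-ansible`) follow a simple poll loop. Reconstructed from the `set -x` trace, under the assumption that hitting `max_attempts` fails the call:

```shell
# Poll `docker inspect` until the container's health check reports "healthy".
# The locals, the inspect format string, and the 5s sleep match the trace;
# the failure branch is an assumption (the successful run never reaches it).
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Docker reports starting/healthy/unhealthy for containers with a HEALTHCHECK
    while [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" != "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name never became healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

In this run `ceph-ansible` cycles through `unhealthy` → `starting` → `healthy` over roughly a minute after the `manager.service` restart, while the other two containers are already healthy on the first check.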
2026-01-02 00:41:03.890987 | orchestrator | 2026-01-02 00:41:03.891092 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-02 00:41:03.891110 | orchestrator | 2026-01-02 00:41:03.891122 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-02 00:41:03.891134 | orchestrator | Friday 02 January 2026 00:40:53 +0000 (0:00:00.239) 0:00:00.239 ******** 2026-01-02 00:41:03.891145 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:41:03.891158 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:41:03.891212 | orchestrator | ok: [testbed-manager] 2026-01-02 00:41:03.891233 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:41:03.891252 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:41:03.891264 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:41:03.891275 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:41:03.891286 | orchestrator | 2026-01-02 00:41:03.891297 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-02 00:41:03.891308 | orchestrator | 2026-01-02 00:41:03.891319 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-02 00:41:03.891330 | orchestrator | Friday 02 January 2026 00:41:02 +0000 (0:00:09.409) 0:00:09.649 ******** 2026-01-02 00:41:03.891341 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:41:03.891353 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:41:03.891364 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:41:03.891375 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:41:03.891386 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:41:03.891397 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:41:03.891407 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:41:03.891418 | orchestrator | 2026-01-02 00:41:03.891429 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-02 00:41:03.891456 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:41:03.891469 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:41:03.891480 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:41:03.891491 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:41:03.891502 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:41:03.891513 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:41:03.891547 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:41:03.891561 | orchestrator | 2026-01-02 00:41:03.891574 | orchestrator | 2026-01-02 00:41:03.891588 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:41:03.891602 | orchestrator | Friday 02 January 2026 00:41:03 +0000 (0:00:00.550) 0:00:10.200 ******** 2026-01-02 00:41:03.891616 | orchestrator | =============================================================================== 2026-01-02 00:41:03.891629 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.41s 2026-01-02 00:41:03.891642 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-01-02 00:41:04.224043 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-01-02 00:41:04.235982 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-01-02 00:41:04.249719 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-01-02 00:41:04.270581 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-01-02 00:41:04.290492 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-01-02 00:41:04.302799 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-01-02 00:41:04.322201 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-01-02 00:41:04.336366 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-01-02 00:41:04.357487 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-01-02 00:41:04.372219 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-01-02 00:41:04.391944 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-01-02 00:41:04.407447 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-01-02 00:41:04.423325 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-01-02 00:41:04.441740 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-01-02 00:41:04.457326 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-01-02 00:41:04.475001 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-01-02 00:41:04.493813 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-01-02 00:41:04.513864 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-01-02 00:41:04.526913 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-01-02 00:41:04.539348 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-01-02 00:41:04.561132 | orchestrator | + [[ false == \t\r\u\e ]] 2026-01-02 00:41:04.829972 | orchestrator | ok: Runtime: 0:25:02.185314 2026-01-02 00:41:04.939416 | 2026-01-02 00:41:04.939566 | TASK [Deploy services] 2026-01-02 00:41:05.477624 | orchestrator | skipping: Conditional result was False 2026-01-02 00:41:05.490507 | 2026-01-02 00:41:05.490672 | TASK [Deploy in a nutshell] 2026-01-02 00:41:06.197673 | orchestrator | + set -e 2026-01-02 00:41:06.197831 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-02 00:41:06.197849 | orchestrator | ++ export INTERACTIVE=false 2026-01-02 00:41:06.197865 | orchestrator | ++ INTERACTIVE=false 2026-01-02 00:41:06.197875 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-02 00:41:06.197884 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-02 00:41:06.197894 | orchestrator | + source /opt/manager-vars.sh 2026-01-02 00:41:06.198047 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-02 00:41:06.198073 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-02 00:41:06.198083 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-02 00:41:06.198095 | orchestrator | ++ CEPH_VERSION=reef 2026-01-02 00:41:06.198103 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-02 00:41:06.199078 | orchestrator | 2026-01-02 
00:41:06.199107 | orchestrator | # PULL IMAGES 2026-01-02 00:41:06.199115 | orchestrator | 2026-01-02 00:41:06.199138 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-02 00:41:06.199156 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-02 00:41:06.199187 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-02 00:41:06.199200 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-02 00:41:06.199207 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-02 00:41:06.199215 | orchestrator | ++ export ARA=false 2026-01-02 00:41:06.199223 | orchestrator | ++ ARA=false 2026-01-02 00:41:06.199234 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-02 00:41:06.199245 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-02 00:41:06.199255 | orchestrator | ++ export TEMPEST=true 2026-01-02 00:41:06.199262 | orchestrator | ++ TEMPEST=true 2026-01-02 00:41:06.199270 | orchestrator | ++ export IS_ZUUL=true 2026-01-02 00:41:06.199278 | orchestrator | ++ IS_ZUUL=true 2026-01-02 00:41:06.199285 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.55 2026-01-02 00:41:06.199293 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.55 2026-01-02 00:41:06.199300 | orchestrator | ++ export EXTERNAL_API=false 2026-01-02 00:41:06.199307 | orchestrator | ++ EXTERNAL_API=false 2026-01-02 00:41:06.199319 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-02 00:41:06.199333 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-02 00:41:06.199346 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-02 00:41:06.199359 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-02 00:41:06.199367 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-02 00:41:06.199387 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-02 00:41:06.199397 | orchestrator | + echo 2026-01-02 00:41:06.199411 | orchestrator | + echo '# PULL IMAGES' 2026-01-02 00:41:06.199424 | orchestrator | + echo 2026-01-02 00:41:06.199686 | orchestrator | ++ semver latest 7.0.0 2026-01-02 00:41:06.262280 | 
orchestrator | + [[ -1 -ge 0 ]] 2026-01-02 00:41:06.262347 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-02 00:41:06.262355 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-01-02 00:41:08.303997 | orchestrator | 2026-01-02 00:41:08 | INFO  | Trying to run play pull-images in environment custom 2026-01-02 00:41:18.543543 | orchestrator | 2026-01-02 00:41:18 | INFO  | Task 1646ec89-48ca-4c3c-9216-2e3023f1f81a (pull-images) was prepared for execution. 2026-01-02 00:41:18.543683 | orchestrator | 2026-01-02 00:41:18 | INFO  | Task 1646ec89-48ca-4c3c-9216-2e3023f1f81a is running in background. No more output. Check ARA for logs. 2026-01-02 00:41:21.093483 | orchestrator | 2026-01-02 00:41:21 | INFO  | Trying to run play wipe-partitions in environment custom 2026-01-02 00:41:31.183122 | orchestrator | 2026-01-02 00:41:31 | INFO  | Task 8af67b43-cd85-4065-870a-4370982dd891 (wipe-partitions) was prepared for execution. 2026-01-02 00:41:31.183250 | orchestrator | 2026-01-02 00:41:31 | INFO  | It takes a moment until task 8af67b43-cd85-4065-870a-4370982dd891 (wipe-partitions) has been started and output is visible here. 
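The `wipe-partitions` play whose output follows boils down to checking each data disk and clearing its signatures with `wipefs`. A minimal per-node sketch, assuming the device list from the task output (`/dev/sdb`..`/dev/sdd`) and `wipefs --all` as the wipe command (the helper name and availability check are illustrative):

```shell
# Clear filesystem/partition signatures on the given block devices.
# Mirrors the "Check device availability" and "Wipe partitions with wipefs"
# tasks below; this is a sketch, not the playbook's actual implementation.
wipe_devices() {
    local dev
    for dev in "$@"; do
        [ -e "$dev" ] || continue   # skip devices that are not present
        wipefs --all "$dev"         # remove all signatures from the device
    done
}
```

Usage in this run would correspond to `wipe_devices /dev/sdb /dev/sdc /dev/sdd` on testbed-node-3 through testbed-node-5 (the Ceph OSD nodes).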

PLAY [Wipe partitions] *********************************************************

TASK [Find all logical devices owned by UID 167] *******************************
Friday 02 January 2026 00:41:36 +0000 (0:00:00.141) 0:00:00.141 ********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [Remove all rook related logical devices] *********************************
Friday 02 January 2026 00:41:36 +0000 (0:00:00.572) 0:00:00.714 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [Find all logical devices with prefix ceph] *******************************
Friday 02 January 2026 00:41:37 +0000 (0:00:00.382) 0:00:01.096 ********
ok: [testbed-node-5]
ok: [testbed-node-3]
ok: [testbed-node-4]

TASK [Remove all ceph related logical devices] *********************************
Friday 02 January 2026 00:41:37 +0000 (0:00:00.593) 0:00:01.690 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [Check device availability] ***********************************************
Friday 02 January 2026 00:41:38 +0000 (0:00:00.283) 0:00:01.974 ********
changed: [testbed-node-3] => (item=/dev/sdb)
changed: [testbed-node-4] => (item=/dev/sdb)
changed: [testbed-node-5] => (item=/dev/sdb)
changed: [testbed-node-3] => (item=/dev/sdc)
changed: [testbed-node-4] => (item=/dev/sdc)
changed: [testbed-node-5] => (item=/dev/sdc)
changed: [testbed-node-4] => (item=/dev/sdd)
changed: [testbed-node-3] => (item=/dev/sdd)
changed: [testbed-node-5] => (item=/dev/sdd)

TASK [Wipe partitions with wipefs] *********************************************
Friday 02 January 2026 00:41:39 +0000 (0:00:01.261) 0:00:03.235 ********
ok: [testbed-node-3] => (item=/dev/sdb)
ok: [testbed-node-4] => (item=/dev/sdb)
ok: [testbed-node-5] => (item=/dev/sdb)
ok: [testbed-node-3] => (item=/dev/sdc)
ok: [testbed-node-4] => (item=/dev/sdc)
ok: [testbed-node-5] => (item=/dev/sdc)
ok: [testbed-node-3] => (item=/dev/sdd)
ok: [testbed-node-4] => (item=/dev/sdd)
ok: [testbed-node-5] => (item=/dev/sdd)

TASK [Overwrite first 32M with zeros] ******************************************
Friday 02 January 2026 00:41:40 +0000 (0:00:01.545) 0:00:04.780 ********
changed: [testbed-node-3] => (item=/dev/sdb)
changed: [testbed-node-4] => (item=/dev/sdb)
changed: [testbed-node-5] => (item=/dev/sdb)
changed: [testbed-node-3] => (item=/dev/sdc)
changed: [testbed-node-4] => (item=/dev/sdc)
changed: [testbed-node-5] => (item=/dev/sdc)
changed: [testbed-node-3] => (item=/dev/sdd)
changed: [testbed-node-4] => (item=/dev/sdd)
changed: [testbed-node-5] => (item=/dev/sdd)

TASK [Reload udev rules] *******************************************************
Friday 02 January 2026 00:41:42 +0000 (0:00:02.040) 0:00:06.821 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [Request device events from the kernel] ***********************************
Friday 02 January 2026 00:41:43 +0000 (0:00:00.616) 0:00:07.437 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

PLAY RECAP *********************************************************************
testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0

TASKS RECAP ********************************************************************
Friday 02 January 2026 00:41:44 +0000 (0:00:00.656) 0:00:08.093 ********
===============================================================================
Overwrite first 32M with zeros ------------------------------------------ 2.04s
Wipe partitions with wipefs --------------------------------------------- 1.55s
Check device availability ----------------------------------------------- 1.26s
Request device events from the kernel ----------------------------------- 0.66s
Reload udev rules ------------------------------------------------------- 0.62s
Find all logical devices with prefix ceph ------------------------------- 0.59s
Find all logical devices owned by UID 167 ------------------------------- 0.57s
Remove all rook related logical devices --------------------------------- 0.38s
Remove all ceph related logical devices --------------------------------- 0.28s
2026-01-02 00:41:57 | INFO  | Task c96ecdf8-33c0-47be-a26f-6a974688320a (facts) was prepared for execution.
2026-01-02 00:41:57 | INFO  | It takes a moment until task c96ecdf8-33c0-47be-a26f-6a974688320a (facts) has been started and output is visible here.

PLAY [Apply role facts] ********************************************************

TASK [osism.commons.facts : Create custom facts directory] *********************
Friday 02 January 2026 00:42:01 +0000 (0:00:00.280) 0:00:00.280 ********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.facts : Copy fact files] ***********************************
Friday 02 January 2026 00:42:02 +0000 (0:00:01.254) 0:00:01.534 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

PLAY [Gather facts for all hosts] **********************************************

TASK [Gathers facts about hosts] ***********************************************
Friday 02 January 2026 00:42:04 +0000 (0:00:01.358) 0:00:02.892 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Gather facts for all hosts if using --limit] *****************************

TASK [Gather facts for all hosts] **********************************************
Friday 02 January 2026 00:42:08 +0000 (0:00:04.780) 0:00:07.673 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

PLAY RECAP *********************************************************************
testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0

TASKS RECAP ********************************************************************
Friday 02 January 2026 00:42:09 +0000 (0:00:00.574) 0:00:08.247 ********
===============================================================================
Gathers facts about hosts ----------------------------------------------- 4.78s
osism.commons.facts : Copy fact files ----------------------------------- 1.36s
osism.commons.facts : Create custom facts directory --------------------- 1.25s
Gather facts for all hosts ---------------------------------------------- 0.57s
2026-01-02 00:42:12 | INFO  | Task 477e9805-40eb-41f8-b695-c51fac192ca1 (ceph-configure-lvm-volumes) was prepared for execution.
2026-01-02 00:42:12 | INFO  | It takes a moment until task 477e9805-40eb-41f8-b695-c51fac192ca1 (ceph-configure-lvm-volumes) has been started and output is visible here.
[WARNING]: Collection community.general does not support Ansible version 2.16.14

PLAY [Ceph configure LVM] ******************************************************

TASK [Get extra vars for Ceph configuration] ***********************************
Friday 02 January 2026 00:42:17 +0000 (0:00:00.626) 0:00:00.626 ********
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]

TASK [Get initial list of available block devices] *****************************
Friday 02 January 2026 00:42:17 +0000 (0:00:00.263) 0:00:00.889 ********
ok: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Friday 02 January 2026 00:42:18 +0000 (0:00:00.244) 0:00:01.134 ********
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)

TASK [Add known links to the list of available block devices] ******************
Friday 02 January 2026 00:42:18 +0000 (0:00:00.591) 0:00:01.725 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Friday 02 January 2026 00:42:18 +0000 (0:00:00.232) 0:00:01.957 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Friday 02 January 2026 00:42:19 +0000 (0:00:00.236) 0:00:02.194 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Friday 02 January 2026 00:42:19 +0000 (0:00:00.256) 0:00:02.451 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Friday 02 January 2026 00:42:19 +0000 (0:00:00.216) 0:00:02.667 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Friday 02 January 2026 00:42:19 +0000 (0:00:00.210) 0:00:02.878 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Friday 02 January 2026 00:42:19 +0000 (0:00:00.209) 0:00:03.087 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Friday 02 January 2026 00:42:20 +0000 (0:00:00.215) 0:00:03.302 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Friday 02 January 2026 00:42:20 +0000 (0:00:00.235) 0:00:03.538 ********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397)

TASK [Add known links to the list of available block devices] ******************
Friday 02 January 2026 00:42:20 +0000 (0:00:00.489) 0:00:04.027 ********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6d9d2903-81fe-42d1-9111-d7d9a87231b0)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6d9d2903-81fe-42d1-9111-d7d9a87231b0)

TASK [Add known links to the list of available block devices] ******************
Friday 02 January 2026 00:42:21 +0000 (0:00:00.714) 0:00:04.742 ********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_91cfe094-4682-4bfc-95e3-88354566cb8a)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_91cfe094-4682-4bfc-95e3-88354566cb8a)

TASK [Add known links to the list of available block devices] ******************
Friday 02 January 2026 00:42:22 +0000 (0:00:00.892) 0:00:05.634 ********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ace49a83-40fe-462c-82a5-a32ee72a9346)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ace49a83-40fe-462c-82a5-a32ee72a9346)

TASK [Add known links to the list of available block devices] ******************
Friday 02 January 2026 00:42:23 +0000 (0:00:01.028) 0:00:06.663 ********
ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)

TASK [Add known partitions to the list of available block devices] *************
Friday 02 January 2026 00:42:23 +0000 (0:00:00.359) 0:00:07.022 ********
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)

TASK [Add known partitions to the list of available block devices] *************
Friday 02 January 2026 00:42:24 +0000 (0:00:00.403) 0:00:07.426 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Friday 02 January 2026 00:42:24 +0000 (0:00:00.231) 0:00:07.658 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Friday 02 January 2026 00:42:24 +0000 (0:00:00.216) 0:00:07.874 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Friday 02 January 2026 00:42:25 +0000 (0:00:00.232) 0:00:08.106 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Friday 02 January 2026 00:42:25 +0000 (0:00:00.236) 0:00:08.343 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Friday 02 January 2026 00:42:25 +0000 (0:00:00.193) 0:00:08.537 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Friday 02 January 2026 00:42:25 +0000 (0:00:00.219) 0:00:08.757 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Friday 02 January 2026 00:42:25 +0000 (0:00:00.203) 0:00:08.960 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Friday 02 January 2026 00:42:26 +0000 (0:00:00.215) 0:00:09.175 ********
ok: [testbed-node-3] => (item=sda1)
ok: [testbed-node-3] => (item=sda14)
ok: [testbed-node-3] => (item=sda15)
ok: [testbed-node-3] => (item=sda16)

TASK [Add known partitions to the list of available block devices] *************
Friday 02 January 2026 00:42:27 +0000 (0:00:01.200) 0:00:10.376 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Friday 02 January 2026 00:42:27 +0000 (0:00:00.206) 0:00:10.582 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Friday 02 January 2026 00:42:27 +0000 (0:00:00.232) 0:00:10.815 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Friday 02 January 2026 00:42:27 +0000 (0:00:00.206) 0:00:11.021 ********
skipping: [testbed-node-3]

TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
Friday 02 January 2026 00:42:28 +0000 (0:00:00.199) 0:00:11.220 ********
ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})

TASK [Generate WAL VG names] ***************************************************
Friday 02 January 2026 00:42:28 +0000 (0:00:00.182) 0:00:11.403 ********
skipping: [testbed-node-3]

TASK [Generate DB VG names] ****************************************************
Friday 02 January 2026 00:42:28 +0000 (0:00:00.145) 0:00:11.549 ********
skipping: [testbed-node-3]

TASK [Generate shared DB/WAL VG names] *****************************************
Friday 02 January 2026 00:42:28 +0000 (0:00:00.131) 0:00:11.681 ********
skipping: [testbed-node-3]

TASK [Define lvm_volumes structures] *******************************************
Friday 02 January 2026 00:42:28 +0000 (0:00:00.142) 0:00:11.823 ********
ok: [testbed-node-3]

TASK [Generate lvm_volumes structure (block only)] *****************************
Friday 02 January 2026 00:42:28 +0000 (0:00:00.155) 0:00:11.979 ********
ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c483f3a2-63e3-5a58-8db6-ff291b90fd92'}})
ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'}})

TASK [Generate lvm_volumes structure (block + db)] *****************************
Friday 02 January 2026 00:42:29 +0000 (0:00:00.177) 0:00:12.156 ********
skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c483f3a2-63e3-5a58-8db6-ff291b90fd92'}})
skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'}})
skipping: [testbed-node-3]

TASK [Generate lvm_volumes structure (block + wal)] ****************************
Friday 02 January 2026 00:42:29 +0000 (0:00:00.151) 0:00:12.308 ********
skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c483f3a2-63e3-5a58-8db6-ff291b90fd92'}})
2026-01-02 00:42:34.370963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'}})  2026-01-02 00:42:34.370974 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:42:34.370984 | orchestrator | 2026-01-02 00:42:34.370996 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-02 00:42:34.371007 | orchestrator | Friday 02 January 2026 00:42:29 +0000 (0:00:00.451) 0:00:12.760 ******** 2026-01-02 00:42:34.371018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c483f3a2-63e3-5a58-8db6-ff291b90fd92'}})  2026-01-02 00:42:34.371047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'}})  2026-01-02 00:42:34.371060 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:42:34.371071 | orchestrator | 2026-01-02 00:42:34.371082 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-02 00:42:34.371099 | orchestrator | Friday 02 January 2026 00:42:29 +0000 (0:00:00.168) 0:00:12.928 ******** 2026-01-02 00:42:34.371142 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:42:34.371155 | orchestrator | 2026-01-02 00:42:34.371167 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-02 00:42:34.371178 | orchestrator | Friday 02 January 2026 00:42:29 +0000 (0:00:00.157) 0:00:13.085 ******** 2026-01-02 00:42:34.371189 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:42:34.371200 | orchestrator | 2026-01-02 00:42:34.371211 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-02 00:42:34.371222 | orchestrator | Friday 02 January 2026 00:42:30 +0000 (0:00:00.139) 0:00:13.224 ******** 2026-01-02 00:42:34.371233 | orchestrator | skipping: [testbed-node-3] 2026-01-02 
00:42:34.371244 | orchestrator | 2026-01-02 00:42:34.371255 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-02 00:42:34.371265 | orchestrator | Friday 02 January 2026 00:42:30 +0000 (0:00:00.136) 0:00:13.361 ******** 2026-01-02 00:42:34.371284 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:42:34.371296 | orchestrator | 2026-01-02 00:42:34.371307 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-02 00:42:34.371318 | orchestrator | Friday 02 January 2026 00:42:30 +0000 (0:00:00.139) 0:00:13.501 ******** 2026-01-02 00:42:34.371329 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:42:34.371340 | orchestrator | 2026-01-02 00:42:34.371351 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-02 00:42:34.371362 | orchestrator | Friday 02 January 2026 00:42:30 +0000 (0:00:00.145) 0:00:13.646 ******** 2026-01-02 00:42:34.371373 | orchestrator | ok: [testbed-node-3] => { 2026-01-02 00:42:34.371384 | orchestrator |  "ceph_osd_devices": { 2026-01-02 00:42:34.371395 | orchestrator |  "sdb": { 2026-01-02 00:42:34.371407 | orchestrator |  "osd_lvm_uuid": "c483f3a2-63e3-5a58-8db6-ff291b90fd92" 2026-01-02 00:42:34.371419 | orchestrator |  }, 2026-01-02 00:42:34.371430 | orchestrator |  "sdc": { 2026-01-02 00:42:34.371441 | orchestrator |  "osd_lvm_uuid": "7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa" 2026-01-02 00:42:34.371452 | orchestrator |  } 2026-01-02 00:42:34.371463 | orchestrator |  } 2026-01-02 00:42:34.371475 | orchestrator | } 2026-01-02 00:42:34.371486 | orchestrator | 2026-01-02 00:42:34.371497 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-02 00:42:34.371509 | orchestrator | Friday 02 January 2026 00:42:30 +0000 (0:00:00.135) 0:00:13.782 ******** 2026-01-02 00:42:34.371520 | orchestrator | skipping: [testbed-node-3] 2026-01-02 
00:42:34.371530 | orchestrator | 2026-01-02 00:42:34.371542 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-02 00:42:34.371553 | orchestrator | Friday 02 January 2026 00:42:30 +0000 (0:00:00.158) 0:00:13.940 ******** 2026-01-02 00:42:34.371564 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:42:34.371575 | orchestrator | 2026-01-02 00:42:34.371586 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-02 00:42:34.371597 | orchestrator | Friday 02 January 2026 00:42:30 +0000 (0:00:00.152) 0:00:14.093 ******** 2026-01-02 00:42:34.371608 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:42:34.371619 | orchestrator | 2026-01-02 00:42:34.371630 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-02 00:42:34.371640 | orchestrator | Friday 02 January 2026 00:42:31 +0000 (0:00:00.145) 0:00:14.238 ******** 2026-01-02 00:42:34.371651 | orchestrator | changed: [testbed-node-3] => { 2026-01-02 00:42:34.371663 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-02 00:42:34.371674 | orchestrator |  "ceph_osd_devices": { 2026-01-02 00:42:34.371685 | orchestrator |  "sdb": { 2026-01-02 00:42:34.371696 | orchestrator |  "osd_lvm_uuid": "c483f3a2-63e3-5a58-8db6-ff291b90fd92" 2026-01-02 00:42:34.371707 | orchestrator |  }, 2026-01-02 00:42:34.371718 | orchestrator |  "sdc": { 2026-01-02 00:42:34.371729 | orchestrator |  "osd_lvm_uuid": "7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa" 2026-01-02 00:42:34.371740 | orchestrator |  } 2026-01-02 00:42:34.371752 | orchestrator |  }, 2026-01-02 00:42:34.371763 | orchestrator |  "lvm_volumes": [ 2026-01-02 00:42:34.371774 | orchestrator |  { 2026-01-02 00:42:34.371785 | orchestrator |  "data": "osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92", 2026-01-02 00:42:34.371796 | orchestrator |  "data_vg": "ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92" 2026-01-02 
00:42:34.371807 | orchestrator |  }, 2026-01-02 00:42:34.371818 | orchestrator |  { 2026-01-02 00:42:34.371829 | orchestrator |  "data": "osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa", 2026-01-02 00:42:34.371840 | orchestrator |  "data_vg": "ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa" 2026-01-02 00:42:34.371856 | orchestrator |  } 2026-01-02 00:42:34.371867 | orchestrator |  ] 2026-01-02 00:42:34.371878 | orchestrator |  } 2026-01-02 00:42:34.371896 | orchestrator | } 2026-01-02 00:42:34.371907 | orchestrator | 2026-01-02 00:42:34.371918 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-02 00:42:34.371929 | orchestrator | Friday 02 January 2026 00:42:31 +0000 (0:00:00.506) 0:00:14.745 ******** 2026-01-02 00:42:34.371940 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-02 00:42:34.371951 | orchestrator | 2026-01-02 00:42:34.371962 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-02 00:42:34.371973 | orchestrator | 2026-01-02 00:42:34.371984 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-02 00:42:34.371995 | orchestrator | Friday 02 January 2026 00:42:33 +0000 (0:00:02.109) 0:00:16.855 ******** 2026-01-02 00:42:34.372006 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-02 00:42:34.372017 | orchestrator | 2026-01-02 00:42:34.372028 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-02 00:42:34.372039 | orchestrator | Friday 02 January 2026 00:42:34 +0000 (0:00:00.288) 0:00:17.143 ******** 2026-01-02 00:42:34.372050 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:42:34.372061 | orchestrator | 2026-01-02 00:42:34.372079 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:43.381687 | orchestrator | Friday 02 January 
2026 00:42:34 +0000 (0:00:00.322) 0:00:17.466 ******** 2026-01-02 00:42:43.381767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-02 00:42:43.381774 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-02 00:42:43.381780 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-02 00:42:43.381785 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-02 00:42:43.381790 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-02 00:42:43.381797 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-02 00:42:43.381805 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-02 00:42:43.381812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-02 00:42:43.381819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-02 00:42:43.381827 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-02 00:42:43.381835 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-02 00:42:43.381845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-02 00:42:43.381852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-02 00:42:43.381861 | orchestrator | 2026-01-02 00:42:43.381870 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:43.381877 | orchestrator | Friday 02 January 2026 00:42:34 +0000 (0:00:00.449) 0:00:17.916 ******** 2026-01-02 00:42:43.381885 | 
orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:43.381895 | orchestrator | 2026-01-02 00:42:43.381903 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:43.381911 | orchestrator | Friday 02 January 2026 00:42:35 +0000 (0:00:00.216) 0:00:18.132 ******** 2026-01-02 00:42:43.381918 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:43.381927 | orchestrator | 2026-01-02 00:42:43.381932 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:43.381937 | orchestrator | Friday 02 January 2026 00:42:35 +0000 (0:00:00.207) 0:00:18.339 ******** 2026-01-02 00:42:43.381944 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:43.381952 | orchestrator | 2026-01-02 00:42:43.381960 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:43.381990 | orchestrator | Friday 02 January 2026 00:42:35 +0000 (0:00:00.196) 0:00:18.535 ******** 2026-01-02 00:42:43.382000 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:43.382007 | orchestrator | 2026-01-02 00:42:43.382041 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:43.382050 | orchestrator | Friday 02 January 2026 00:42:35 +0000 (0:00:00.194) 0:00:18.730 ******** 2026-01-02 00:42:43.382058 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:43.382067 | orchestrator | 2026-01-02 00:42:43.382076 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:43.382084 | orchestrator | Friday 02 January 2026 00:42:36 +0000 (0:00:00.830) 0:00:19.561 ******** 2026-01-02 00:42:43.382092 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:43.382100 | orchestrator | 2026-01-02 00:42:43.382140 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2026-01-02 00:42:43.382149 | orchestrator | Friday 02 January 2026 00:42:36 +0000 (0:00:00.208) 0:00:19.769 ******** 2026-01-02 00:42:43.382157 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:43.382165 | orchestrator | 2026-01-02 00:42:43.382172 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:43.382180 | orchestrator | Friday 02 January 2026 00:42:36 +0000 (0:00:00.206) 0:00:19.976 ******** 2026-01-02 00:42:43.382186 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:43.382194 | orchestrator | 2026-01-02 00:42:43.382202 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:43.382210 | orchestrator | Friday 02 January 2026 00:42:37 +0000 (0:00:00.203) 0:00:20.179 ******** 2026-01-02 00:42:43.382218 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6) 2026-01-02 00:42:43.382227 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6) 2026-01-02 00:42:43.382234 | orchestrator | 2026-01-02 00:42:43.382242 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:43.382250 | orchestrator | Friday 02 January 2026 00:42:37 +0000 (0:00:00.438) 0:00:20.618 ******** 2026-01-02 00:42:43.382258 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_84499345-a879-443a-82ee-40e5571fa8cd) 2026-01-02 00:42:43.382266 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_84499345-a879-443a-82ee-40e5571fa8cd) 2026-01-02 00:42:43.382274 | orchestrator | 2026-01-02 00:42:43.382282 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:43.382290 | orchestrator | Friday 02 January 2026 00:42:37 +0000 (0:00:00.451) 0:00:21.070 ******** 2026-01-02 00:42:43.382298 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7a849538-9b89-4e07-840a-8a2ecc10a58d) 2026-01-02 00:42:43.382305 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7a849538-9b89-4e07-840a-8a2ecc10a58d) 2026-01-02 00:42:43.382314 | orchestrator | 2026-01-02 00:42:43.382322 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:43.382345 | orchestrator | Friday 02 January 2026 00:42:38 +0000 (0:00:00.430) 0:00:21.500 ******** 2026-01-02 00:42:43.382354 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_496b1234-da7e-4975-8125-a1f8cbe1a452) 2026-01-02 00:42:43.382362 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_496b1234-da7e-4975-8125-a1f8cbe1a452) 2026-01-02 00:42:43.382370 | orchestrator | 2026-01-02 00:42:43.382379 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:43.382386 | orchestrator | Friday 02 January 2026 00:42:38 +0000 (0:00:00.472) 0:00:21.972 ******** 2026-01-02 00:42:43.382392 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-02 00:42:43.382397 | orchestrator | 2026-01-02 00:42:43.382402 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:43.382408 | orchestrator | Friday 02 January 2026 00:42:39 +0000 (0:00:00.364) 0:00:22.336 ******** 2026-01-02 00:42:43.382419 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-01-02 00:42:43.382424 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-02 00:42:43.382430 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-02 00:42:43.382435 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-02 
00:42:43.382441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-02 00:42:43.382446 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-02 00:42:43.382451 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-02 00:42:43.382457 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-02 00:42:43.382462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-02 00:42:43.382467 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-02 00:42:43.382473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-02 00:42:43.382478 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-02 00:42:43.382483 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-02 00:42:43.382489 | orchestrator | 2026-01-02 00:42:43.382494 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:43.382500 | orchestrator | Friday 02 January 2026 00:42:39 +0000 (0:00:00.467) 0:00:22.804 ******** 2026-01-02 00:42:43.382505 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:43.382511 | orchestrator | 2026-01-02 00:42:43.382516 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:43.382527 | orchestrator | Friday 02 January 2026 00:42:40 +0000 (0:00:00.841) 0:00:23.645 ******** 2026-01-02 00:42:43.382533 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:43.382538 | orchestrator | 2026-01-02 00:42:43.382544 | orchestrator | TASK [Add known partitions to the list 
of available block devices] ************* 2026-01-02 00:42:43.382549 | orchestrator | Friday 02 January 2026 00:42:40 +0000 (0:00:00.214) 0:00:23.859 ******** 2026-01-02 00:42:43.382554 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:43.382559 | orchestrator | 2026-01-02 00:42:43.382565 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:43.382570 | orchestrator | Friday 02 January 2026 00:42:40 +0000 (0:00:00.222) 0:00:24.082 ******** 2026-01-02 00:42:43.382576 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:43.382581 | orchestrator | 2026-01-02 00:42:43.382586 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:43.382592 | orchestrator | Friday 02 January 2026 00:42:41 +0000 (0:00:00.245) 0:00:24.328 ******** 2026-01-02 00:42:43.382598 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:43.382603 | orchestrator | 2026-01-02 00:42:43.382608 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:43.382612 | orchestrator | Friday 02 January 2026 00:42:41 +0000 (0:00:00.220) 0:00:24.548 ******** 2026-01-02 00:42:43.382617 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:43.382622 | orchestrator | 2026-01-02 00:42:43.382626 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:43.382631 | orchestrator | Friday 02 January 2026 00:42:41 +0000 (0:00:00.255) 0:00:24.804 ******** 2026-01-02 00:42:43.382636 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:43.382640 | orchestrator | 2026-01-02 00:42:43.382645 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:43.382649 | orchestrator | Friday 02 January 2026 00:42:41 +0000 (0:00:00.232) 0:00:25.036 ******** 2026-01-02 00:42:43.382657 | orchestrator | skipping: 
[testbed-node-4] 2026-01-02 00:42:43.382662 | orchestrator | 2026-01-02 00:42:43.382667 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:43.382671 | orchestrator | Friday 02 January 2026 00:42:42 +0000 (0:00:00.225) 0:00:25.262 ******** 2026-01-02 00:42:43.382676 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-02 00:42:43.382681 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-02 00:42:43.382686 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-02 00:42:43.382691 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-02 00:42:43.382696 | orchestrator | 2026-01-02 00:42:43.382700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:43.382705 | orchestrator | Friday 02 January 2026 00:42:43 +0000 (0:00:00.992) 0:00:26.255 ******** 2026-01-02 00:42:43.382710 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:51.698956 | orchestrator | 2026-01-02 00:42:51.699063 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:51.699080 | orchestrator | Friday 02 January 2026 00:42:43 +0000 (0:00:00.224) 0:00:26.480 ******** 2026-01-02 00:42:51.699093 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:51.699136 | orchestrator | 2026-01-02 00:42:51.699148 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:51.699159 | orchestrator | Friday 02 January 2026 00:42:43 +0000 (0:00:00.211) 0:00:26.691 ******** 2026-01-02 00:42:51.699170 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:51.699181 | orchestrator | 2026-01-02 00:42:51.699192 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:51.699203 | orchestrator | Friday 02 January 2026 00:42:43 +0000 (0:00:00.228) 0:00:26.920 ******** 2026-01-02 
00:42:51.699214 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:51.699225 | orchestrator | 2026-01-02 00:42:51.699236 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-02 00:42:51.699247 | orchestrator | Friday 02 January 2026 00:42:44 +0000 (0:00:00.887) 0:00:27.808 ******** 2026-01-02 00:42:51.699258 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-01-02 00:42:51.699269 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-01-02 00:42:51.699280 | orchestrator | 2026-01-02 00:42:51.699291 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-02 00:42:51.699302 | orchestrator | Friday 02 January 2026 00:42:44 +0000 (0:00:00.203) 0:00:28.011 ******** 2026-01-02 00:42:51.699312 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:51.699324 | orchestrator | 2026-01-02 00:42:51.699335 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-02 00:42:51.699346 | orchestrator | Friday 02 January 2026 00:42:45 +0000 (0:00:00.153) 0:00:28.164 ******** 2026-01-02 00:42:51.699357 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:51.699368 | orchestrator | 2026-01-02 00:42:51.699379 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-02 00:42:51.699390 | orchestrator | Friday 02 January 2026 00:42:45 +0000 (0:00:00.154) 0:00:28.319 ******** 2026-01-02 00:42:51.699401 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:51.699411 | orchestrator | 2026-01-02 00:42:51.699422 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-02 00:42:51.699433 | orchestrator | Friday 02 January 2026 00:42:45 +0000 (0:00:00.136) 0:00:28.455 ******** 2026-01-02 00:42:51.699444 | orchestrator | ok: [testbed-node-4] 2026-01-02 
00:42:51.699456 | orchestrator | 2026-01-02 00:42:51.699467 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-02 00:42:51.699478 | orchestrator | Friday 02 January 2026 00:42:45 +0000 (0:00:00.142) 0:00:28.598 ******** 2026-01-02 00:42:51.699492 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98c0a427-0bfe-5560-90fa-409a46d34f73'}}) 2026-01-02 00:42:51.699505 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b563cbc7-469d-5dd4-bc68-32b49ff22a36'}}) 2026-01-02 00:42:51.699545 | orchestrator | 2026-01-02 00:42:51.699558 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-02 00:42:51.699571 | orchestrator | Friday 02 January 2026 00:42:45 +0000 (0:00:00.233) 0:00:28.831 ******** 2026-01-02 00:42:51.699584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98c0a427-0bfe-5560-90fa-409a46d34f73'}})  2026-01-02 00:42:51.699615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b563cbc7-469d-5dd4-bc68-32b49ff22a36'}})  2026-01-02 00:42:51.699628 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:51.699642 | orchestrator | 2026-01-02 00:42:51.699654 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-02 00:42:51.699667 | orchestrator | Friday 02 January 2026 00:42:45 +0000 (0:00:00.159) 0:00:28.990 ******** 2026-01-02 00:42:51.699679 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98c0a427-0bfe-5560-90fa-409a46d34f73'}})  2026-01-02 00:42:51.699691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b563cbc7-469d-5dd4-bc68-32b49ff22a36'}})  2026-01-02 00:42:51.699704 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:51.699718 | 
orchestrator | 2026-01-02 00:42:51.699731 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-02 00:42:51.699743 | orchestrator | Friday 02 January 2026 00:42:46 +0000 (0:00:00.181) 0:00:29.172 ******** 2026-01-02 00:42:51.699756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98c0a427-0bfe-5560-90fa-409a46d34f73'}})  2026-01-02 00:42:51.699770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b563cbc7-469d-5dd4-bc68-32b49ff22a36'}})  2026-01-02 00:42:51.699783 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:51.699796 | orchestrator | 2026-01-02 00:42:51.699807 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-02 00:42:51.699818 | orchestrator | Friday 02 January 2026 00:42:46 +0000 (0:00:00.175) 0:00:29.347 ******** 2026-01-02 00:42:51.699829 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:42:51.699840 | orchestrator | 2026-01-02 00:42:51.699851 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-02 00:42:51.699862 | orchestrator | Friday 02 January 2026 00:42:46 +0000 (0:00:00.146) 0:00:29.494 ******** 2026-01-02 00:42:51.699873 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:42:51.699884 | orchestrator | 2026-01-02 00:42:51.699895 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-02 00:42:51.699906 | orchestrator | Friday 02 January 2026 00:42:46 +0000 (0:00:00.144) 0:00:29.639 ******** 2026-01-02 00:42:51.699935 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:51.699947 | orchestrator | 2026-01-02 00:42:51.699958 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-02 00:42:51.699969 | orchestrator | Friday 02 January 2026 00:42:46 +0000 (0:00:00.426) 0:00:30.066 
******** 2026-01-02 00:42:51.699980 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:51.699991 | orchestrator | 2026-01-02 00:42:51.700003 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-02 00:42:51.700013 | orchestrator | Friday 02 January 2026 00:42:47 +0000 (0:00:00.141) 0:00:30.207 ******** 2026-01-02 00:42:51.700024 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:51.700035 | orchestrator | 2026-01-02 00:42:51.700046 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-02 00:42:51.700057 | orchestrator | Friday 02 January 2026 00:42:47 +0000 (0:00:00.145) 0:00:30.352 ******** 2026-01-02 00:42:51.700068 | orchestrator | ok: [testbed-node-4] => { 2026-01-02 00:42:51.700079 | orchestrator |  "ceph_osd_devices": { 2026-01-02 00:42:51.700090 | orchestrator |  "sdb": { 2026-01-02 00:42:51.700152 | orchestrator |  "osd_lvm_uuid": "98c0a427-0bfe-5560-90fa-409a46d34f73" 2026-01-02 00:42:51.700175 | orchestrator |  }, 2026-01-02 00:42:51.700187 | orchestrator |  "sdc": { 2026-01-02 00:42:51.700198 | orchestrator |  "osd_lvm_uuid": "b563cbc7-469d-5dd4-bc68-32b49ff22a36" 2026-01-02 00:42:51.700209 | orchestrator |  } 2026-01-02 00:42:51.700307 | orchestrator |  } 2026-01-02 00:42:51.700323 | orchestrator | } 2026-01-02 00:42:51.700334 | orchestrator | 2026-01-02 00:42:51.700346 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-02 00:42:51.700357 | orchestrator | Friday 02 January 2026 00:42:47 +0000 (0:00:00.164) 0:00:30.516 ******** 2026-01-02 00:42:51.700367 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:51.700379 | orchestrator | 2026-01-02 00:42:51.700389 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-02 00:42:51.700400 | orchestrator | Friday 02 January 2026 00:42:47 +0000 (0:00:00.153) 0:00:30.670 ******** 
2026-01-02 00:42:51.700411 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:51.700422 | orchestrator | 2026-01-02 00:42:51.700433 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-02 00:42:51.700444 | orchestrator | Friday 02 January 2026 00:42:47 +0000 (0:00:00.186) 0:00:30.856 ******** 2026-01-02 00:42:51.700455 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:51.700466 | orchestrator | 2026-01-02 00:42:51.700477 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-02 00:42:51.700487 | orchestrator | Friday 02 January 2026 00:42:47 +0000 (0:00:00.168) 0:00:31.025 ******** 2026-01-02 00:42:51.700498 | orchestrator | changed: [testbed-node-4] => { 2026-01-02 00:42:51.700510 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-02 00:42:51.700521 | orchestrator |  "ceph_osd_devices": { 2026-01-02 00:42:51.700532 | orchestrator |  "sdb": { 2026-01-02 00:42:51.700543 | orchestrator |  "osd_lvm_uuid": "98c0a427-0bfe-5560-90fa-409a46d34f73" 2026-01-02 00:42:51.700554 | orchestrator |  }, 2026-01-02 00:42:51.700566 | orchestrator |  "sdc": { 2026-01-02 00:42:51.700577 | orchestrator |  "osd_lvm_uuid": "b563cbc7-469d-5dd4-bc68-32b49ff22a36" 2026-01-02 00:42:51.700588 | orchestrator |  } 2026-01-02 00:42:51.700599 | orchestrator |  }, 2026-01-02 00:42:51.700610 | orchestrator |  "lvm_volumes": [ 2026-01-02 00:42:51.700621 | orchestrator |  { 2026-01-02 00:42:51.700632 | orchestrator |  "data": "osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73", 2026-01-02 00:42:51.700643 | orchestrator |  "data_vg": "ceph-98c0a427-0bfe-5560-90fa-409a46d34f73" 2026-01-02 00:42:51.700653 | orchestrator |  }, 2026-01-02 00:42:51.700664 | orchestrator |  { 2026-01-02 00:42:51.700675 | orchestrator |  "data": "osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36", 2026-01-02 00:42:51.700686 | orchestrator |  "data_vg": "ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36" 
2026-01-02 00:42:51.700697 | orchestrator |  } 2026-01-02 00:42:51.700708 | orchestrator |  ] 2026-01-02 00:42:51.700719 | orchestrator |  } 2026-01-02 00:42:51.700730 | orchestrator | } 2026-01-02 00:42:51.700741 | orchestrator | 2026-01-02 00:42:51.700753 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-02 00:42:51.700763 | orchestrator | Friday 02 January 2026 00:42:48 +0000 (0:00:00.261) 0:00:31.286 ******** 2026-01-02 00:42:51.700774 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-02 00:42:51.700785 | orchestrator | 2026-01-02 00:42:51.700796 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-02 00:42:51.700807 | orchestrator | 2026-01-02 00:42:51.700818 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-02 00:42:51.700828 | orchestrator | Friday 02 January 2026 00:42:49 +0000 (0:00:01.807) 0:00:33.094 ******** 2026-01-02 00:42:51.700839 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-02 00:42:51.700850 | orchestrator | 2026-01-02 00:42:51.700862 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-02 00:42:51.700887 | orchestrator | Friday 02 January 2026 00:42:50 +0000 (0:00:00.983) 0:00:34.078 ******** 2026-01-02 00:42:51.700899 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:42:51.700910 | orchestrator | 2026-01-02 00:42:51.700921 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:51.700932 | orchestrator | Friday 02 January 2026 00:42:51 +0000 (0:00:00.261) 0:00:34.339 ******** 2026-01-02 00:42:51.700943 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-01-02 00:42:51.700954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop1) 2026-01-02 00:42:51.700965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-01-02 00:42:51.700976 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-01-02 00:42:51.700987 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-01-02 00:42:51.701007 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-01-02 00:43:00.371413 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-01-02 00:43:00.371513 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-01-02 00:43:00.371524 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-01-02 00:43:00.371532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-01-02 00:43:00.371539 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-01-02 00:43:00.371545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-01-02 00:43:00.371552 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-01-02 00:43:00.371560 | orchestrator | 2026-01-02 00:43:00.371568 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:43:00.371576 | orchestrator | Friday 02 January 2026 00:42:51 +0000 (0:00:00.453) 0:00:34.793 ******** 2026-01-02 00:43:00.371583 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.371591 | orchestrator | 2026-01-02 00:43:00.371598 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:43:00.371605 | orchestrator | Friday 02 January 2026 
00:42:51 +0000 (0:00:00.219) 0:00:35.012 ******** 2026-01-02 00:43:00.371612 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.371619 | orchestrator | 2026-01-02 00:43:00.371626 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:43:00.371633 | orchestrator | Friday 02 January 2026 00:42:52 +0000 (0:00:00.197) 0:00:35.210 ******** 2026-01-02 00:43:00.371639 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.371646 | orchestrator | 2026-01-02 00:43:00.371652 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:43:00.371659 | orchestrator | Friday 02 January 2026 00:42:52 +0000 (0:00:00.211) 0:00:35.422 ******** 2026-01-02 00:43:00.371666 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.371673 | orchestrator | 2026-01-02 00:43:00.371680 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:43:00.371687 | orchestrator | Friday 02 January 2026 00:42:52 +0000 (0:00:00.296) 0:00:35.718 ******** 2026-01-02 00:43:00.371694 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.371700 | orchestrator | 2026-01-02 00:43:00.371707 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:43:00.371714 | orchestrator | Friday 02 January 2026 00:42:52 +0000 (0:00:00.244) 0:00:35.963 ******** 2026-01-02 00:43:00.371721 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.371728 | orchestrator | 2026-01-02 00:43:00.371735 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:43:00.371769 | orchestrator | Friday 02 January 2026 00:42:53 +0000 (0:00:00.192) 0:00:36.155 ******** 2026-01-02 00:43:00.371777 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.371784 | orchestrator | 2026-01-02 00:43:00.371790 | orchestrator | TASK 
[Add known links to the list of available block devices] ****************** 2026-01-02 00:43:00.371797 | orchestrator | Friday 02 January 2026 00:42:53 +0000 (0:00:00.230) 0:00:36.386 ******** 2026-01-02 00:43:00.371803 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.371810 | orchestrator | 2026-01-02 00:43:00.371817 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:43:00.371824 | orchestrator | Friday 02 January 2026 00:42:53 +0000 (0:00:00.196) 0:00:36.582 ******** 2026-01-02 00:43:00.371830 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef) 2026-01-02 00:43:00.371838 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef) 2026-01-02 00:43:00.371844 | orchestrator | 2026-01-02 00:43:00.371851 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:43:00.371858 | orchestrator | Friday 02 January 2026 00:42:54 +0000 (0:00:01.054) 0:00:37.636 ******** 2026-01-02 00:43:00.371864 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3f193762-36b0-4c27-b28e-8efb206edc66) 2026-01-02 00:43:00.371871 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3f193762-36b0-4c27-b28e-8efb206edc66) 2026-01-02 00:43:00.371878 | orchestrator | 2026-01-02 00:43:00.371885 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:43:00.371892 | orchestrator | Friday 02 January 2026 00:42:55 +0000 (0:00:00.488) 0:00:38.124 ******** 2026-01-02 00:43:00.371898 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_26cdd52f-83be-4086-bce2-9cb6df4f24ab) 2026-01-02 00:43:00.371905 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_26cdd52f-83be-4086-bce2-9cb6df4f24ab) 2026-01-02 00:43:00.371911 | orchestrator | 2026-01-02 
00:43:00.371918 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:43:00.371925 | orchestrator | Friday 02 January 2026 00:42:55 +0000 (0:00:00.590) 0:00:38.715 ******** 2026-01-02 00:43:00.371932 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3a47a132-03ad-4adf-a37b-d405efe1a07c) 2026-01-02 00:43:00.371939 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3a47a132-03ad-4adf-a37b-d405efe1a07c) 2026-01-02 00:43:00.371945 | orchestrator | 2026-01-02 00:43:00.371952 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:43:00.371959 | orchestrator | Friday 02 January 2026 00:42:56 +0000 (0:00:00.601) 0:00:39.316 ******** 2026-01-02 00:43:00.371965 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-02 00:43:00.371973 | orchestrator | 2026-01-02 00:43:00.371980 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:43:00.372003 | orchestrator | Friday 02 January 2026 00:42:56 +0000 (0:00:00.335) 0:00:39.652 ******** 2026-01-02 00:43:00.372010 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-02 00:43:00.372017 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-02 00:43:00.372024 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-02 00:43:00.372031 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-02 00:43:00.372039 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-02 00:43:00.372063 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-02 00:43:00.372072 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-02 00:43:00.372079 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-02 00:43:00.372117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-02 00:43:00.372125 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-02 00:43:00.372132 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-02 00:43:00.372140 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-02 00:43:00.372147 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-02 00:43:00.372154 | orchestrator | 2026-01-02 00:43:00.372161 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:43:00.372167 | orchestrator | Friday 02 January 2026 00:42:56 +0000 (0:00:00.387) 0:00:40.039 ******** 2026-01-02 00:43:00.372174 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.372181 | orchestrator | 2026-01-02 00:43:00.372187 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:43:00.372194 | orchestrator | Friday 02 January 2026 00:42:57 +0000 (0:00:00.229) 0:00:40.269 ******** 2026-01-02 00:43:00.372201 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.372208 | orchestrator | 2026-01-02 00:43:00.372215 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:43:00.372225 | orchestrator | Friday 02 January 2026 00:42:57 +0000 (0:00:00.205) 0:00:40.474 ******** 2026-01-02 00:43:00.372232 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.372238 | orchestrator | 
2026-01-02 00:43:00.372245 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:43:00.372252 | orchestrator | Friday 02 January 2026 00:42:57 +0000 (0:00:00.185) 0:00:40.659 ******** 2026-01-02 00:43:00.372258 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.372265 | orchestrator | 2026-01-02 00:43:00.372272 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:43:00.372279 | orchestrator | Friday 02 January 2026 00:42:57 +0000 (0:00:00.188) 0:00:40.847 ******** 2026-01-02 00:43:00.372285 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.372291 | orchestrator | 2026-01-02 00:43:00.372298 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:43:00.372305 | orchestrator | Friday 02 January 2026 00:42:57 +0000 (0:00:00.183) 0:00:41.030 ******** 2026-01-02 00:43:00.372312 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.372319 | orchestrator | 2026-01-02 00:43:00.372326 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:43:00.372333 | orchestrator | Friday 02 January 2026 00:42:58 +0000 (0:00:00.570) 0:00:41.600 ******** 2026-01-02 00:43:00.372339 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.372345 | orchestrator | 2026-01-02 00:43:00.372352 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:43:00.372359 | orchestrator | Friday 02 January 2026 00:42:58 +0000 (0:00:00.197) 0:00:41.798 ******** 2026-01-02 00:43:00.372366 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.372373 | orchestrator | 2026-01-02 00:43:00.372380 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:43:00.372386 | orchestrator | Friday 02 January 2026 00:42:58 +0000 
(0:00:00.190) 0:00:41.989 ******** 2026-01-02 00:43:00.372393 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-02 00:43:00.372400 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-01-02 00:43:00.372407 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-02 00:43:00.372414 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-02 00:43:00.372421 | orchestrator | 2026-01-02 00:43:00.372428 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:43:00.372435 | orchestrator | Friday 02 January 2026 00:42:59 +0000 (0:00:00.628) 0:00:42.617 ******** 2026-01-02 00:43:00.372441 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.372453 | orchestrator | 2026-01-02 00:43:00.372461 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:43:00.372468 | orchestrator | Friday 02 January 2026 00:42:59 +0000 (0:00:00.207) 0:00:42.825 ******** 2026-01-02 00:43:00.372475 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.372482 | orchestrator | 2026-01-02 00:43:00.372489 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:43:00.372495 | orchestrator | Friday 02 January 2026 00:42:59 +0000 (0:00:00.208) 0:00:43.034 ******** 2026-01-02 00:43:00.372502 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.372508 | orchestrator | 2026-01-02 00:43:00.372515 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:43:00.372523 | orchestrator | Friday 02 January 2026 00:43:00 +0000 (0:00:00.237) 0:00:43.272 ******** 2026-01-02 00:43:00.372529 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:00.372537 | orchestrator | 2026-01-02 00:43:00.372549 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-02 00:43:05.172796 | orchestrator | 
Friday 02 January 2026 00:43:00 +0000 (0:00:00.194) 0:00:43.466 ******** 2026-01-02 00:43:05.172887 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-01-02 00:43:05.172898 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-01-02 00:43:05.172905 | orchestrator | 2026-01-02 00:43:05.172913 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-02 00:43:05.172920 | orchestrator | Friday 02 January 2026 00:43:00 +0000 (0:00:00.165) 0:00:43.632 ******** 2026-01-02 00:43:05.172927 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:05.172934 | orchestrator | 2026-01-02 00:43:05.172940 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-02 00:43:05.172947 | orchestrator | Friday 02 January 2026 00:43:00 +0000 (0:00:00.142) 0:00:43.774 ******** 2026-01-02 00:43:05.172953 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:05.172959 | orchestrator | 2026-01-02 00:43:05.172966 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-02 00:43:05.172972 | orchestrator | Friday 02 January 2026 00:43:00 +0000 (0:00:00.150) 0:00:43.925 ******** 2026-01-02 00:43:05.172979 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:05.172984 | orchestrator | 2026-01-02 00:43:05.172988 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-02 00:43:05.172993 | orchestrator | Friday 02 January 2026 00:43:01 +0000 (0:00:00.461) 0:00:44.387 ******** 2026-01-02 00:43:05.172999 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:43:05.173006 | orchestrator | 2026-01-02 00:43:05.173013 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-02 00:43:05.173020 | orchestrator | Friday 02 January 2026 00:43:01 +0000 (0:00:00.165) 0:00:44.552 ******** 
2026-01-02 00:43:05.173027 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8c17e839-2cbb-5f17-abcc-9f26ae111b42'}}) 2026-01-02 00:43:05.173034 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '37cfd703-64b2-55b0-ad28-4f6812d5fa0d'}}) 2026-01-02 00:43:05.173040 | orchestrator | 2026-01-02 00:43:05.173046 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-02 00:43:05.173052 | orchestrator | Friday 02 January 2026 00:43:01 +0000 (0:00:00.232) 0:00:44.785 ******** 2026-01-02 00:43:05.173059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8c17e839-2cbb-5f17-abcc-9f26ae111b42'}})  2026-01-02 00:43:05.173067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '37cfd703-64b2-55b0-ad28-4f6812d5fa0d'}})  2026-01-02 00:43:05.173073 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:05.173079 | orchestrator | 2026-01-02 00:43:05.173086 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-02 00:43:05.173111 | orchestrator | Friday 02 January 2026 00:43:01 +0000 (0:00:00.161) 0:00:44.947 ******** 2026-01-02 00:43:05.173143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8c17e839-2cbb-5f17-abcc-9f26ae111b42'}})  2026-01-02 00:43:05.173149 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '37cfd703-64b2-55b0-ad28-4f6812d5fa0d'}})  2026-01-02 00:43:05.173155 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:05.173162 | orchestrator | 2026-01-02 00:43:05.173168 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-02 00:43:05.173174 | orchestrator | Friday 02 January 2026 00:43:02 +0000 (0:00:00.180) 0:00:45.127 ******** 2026-01-02 00:43:05.173194 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8c17e839-2cbb-5f17-abcc-9f26ae111b42'}})  2026-01-02 00:43:05.173201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '37cfd703-64b2-55b0-ad28-4f6812d5fa0d'}})  2026-01-02 00:43:05.173207 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:05.173213 | orchestrator | 2026-01-02 00:43:05.173219 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-02 00:43:05.173225 | orchestrator | Friday 02 January 2026 00:43:02 +0000 (0:00:00.214) 0:00:45.342 ******** 2026-01-02 00:43:05.173231 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:43:05.173237 | orchestrator | 2026-01-02 00:43:05.173243 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-02 00:43:05.173249 | orchestrator | Friday 02 January 2026 00:43:02 +0000 (0:00:00.152) 0:00:45.494 ******** 2026-01-02 00:43:05.173254 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:43:05.173260 | orchestrator | 2026-01-02 00:43:05.173265 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-02 00:43:05.173271 | orchestrator | Friday 02 January 2026 00:43:02 +0000 (0:00:00.144) 0:00:45.639 ******** 2026-01-02 00:43:05.173277 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:05.173282 | orchestrator | 2026-01-02 00:43:05.173289 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-02 00:43:05.173295 | orchestrator | Friday 02 January 2026 00:43:02 +0000 (0:00:00.134) 0:00:45.773 ******** 2026-01-02 00:43:05.173301 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:05.173307 | orchestrator | 2026-01-02 00:43:05.173313 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-02 00:43:05.173320 | orchestrator | 
Friday 02 January 2026 00:43:02 +0000 (0:00:00.141) 0:00:45.915 ******** 2026-01-02 00:43:05.173326 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:05.173332 | orchestrator | 2026-01-02 00:43:05.173338 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-02 00:43:05.173345 | orchestrator | Friday 02 January 2026 00:43:03 +0000 (0:00:00.188) 0:00:46.103 ******** 2026-01-02 00:43:05.173351 | orchestrator | ok: [testbed-node-5] => { 2026-01-02 00:43:05.173357 | orchestrator |  "ceph_osd_devices": { 2026-01-02 00:43:05.173363 | orchestrator |  "sdb": { 2026-01-02 00:43:05.173385 | orchestrator |  "osd_lvm_uuid": "8c17e839-2cbb-5f17-abcc-9f26ae111b42" 2026-01-02 00:43:05.173393 | orchestrator |  }, 2026-01-02 00:43:05.173400 | orchestrator |  "sdc": { 2026-01-02 00:43:05.173407 | orchestrator |  "osd_lvm_uuid": "37cfd703-64b2-55b0-ad28-4f6812d5fa0d" 2026-01-02 00:43:05.173414 | orchestrator |  } 2026-01-02 00:43:05.173421 | orchestrator |  } 2026-01-02 00:43:05.173428 | orchestrator | } 2026-01-02 00:43:05.173434 | orchestrator | 2026-01-02 00:43:05.173440 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-02 00:43:05.173447 | orchestrator | Friday 02 January 2026 00:43:03 +0000 (0:00:00.161) 0:00:46.265 ******** 2026-01-02 00:43:05.173453 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:05.173460 | orchestrator | 2026-01-02 00:43:05.173466 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-02 00:43:05.173473 | orchestrator | Friday 02 January 2026 00:43:03 +0000 (0:00:00.427) 0:00:46.693 ******** 2026-01-02 00:43:05.173489 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:05.173495 | orchestrator | 2026-01-02 00:43:05.173501 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-02 00:43:05.173508 | orchestrator | Friday 02 
January 2026 00:43:03 +0000 (0:00:00.148) 0:00:46.842 ******** 2026-01-02 00:43:05.173514 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:43:05.173519 | orchestrator | 2026-01-02 00:43:05.173526 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-02 00:43:05.173532 | orchestrator | Friday 02 January 2026 00:43:03 +0000 (0:00:00.166) 0:00:47.008 ******** 2026-01-02 00:43:05.173539 | orchestrator | changed: [testbed-node-5] => { 2026-01-02 00:43:05.173545 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-02 00:43:05.173552 | orchestrator |  "ceph_osd_devices": { 2026-01-02 00:43:05.173557 | orchestrator |  "sdb": { 2026-01-02 00:43:05.173564 | orchestrator |  "osd_lvm_uuid": "8c17e839-2cbb-5f17-abcc-9f26ae111b42" 2026-01-02 00:43:05.173570 | orchestrator |  }, 2026-01-02 00:43:05.173576 | orchestrator |  "sdc": { 2026-01-02 00:43:05.173583 | orchestrator |  "osd_lvm_uuid": "37cfd703-64b2-55b0-ad28-4f6812d5fa0d" 2026-01-02 00:43:05.173588 | orchestrator |  } 2026-01-02 00:43:05.173595 | orchestrator |  }, 2026-01-02 00:43:05.173601 | orchestrator |  "lvm_volumes": [ 2026-01-02 00:43:05.173607 | orchestrator |  { 2026-01-02 00:43:05.173614 | orchestrator |  "data": "osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42", 2026-01-02 00:43:05.173621 | orchestrator |  "data_vg": "ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42" 2026-01-02 00:43:05.173628 | orchestrator |  }, 2026-01-02 00:43:05.173634 | orchestrator |  { 2026-01-02 00:43:05.173641 | orchestrator |  "data": "osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d", 2026-01-02 00:43:05.173647 | orchestrator |  "data_vg": "ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d" 2026-01-02 00:43:05.173653 | orchestrator |  } 2026-01-02 00:43:05.173664 | orchestrator |  ] 2026-01-02 00:43:05.173670 | orchestrator |  } 2026-01-02 00:43:05.173699 | orchestrator | } 2026-01-02 00:43:05.173705 | orchestrator | 2026-01-02 00:43:05.173712 | orchestrator | RUNNING HANDLER 
[Write configuration file] ************************************* 2026-01-02 00:43:05.173718 | orchestrator | Friday 02 January 2026 00:43:04 +0000 (0:00:00.225) 0:00:47.234 ******** 2026-01-02 00:43:05.173725 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-02 00:43:05.173731 | orchestrator | 2026-01-02 00:43:05.173737 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:43:05.173744 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-02 00:43:05.173752 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-02 00:43:05.173759 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-02 00:43:05.173765 | orchestrator | 2026-01-02 00:43:05.173771 | orchestrator | 2026-01-02 00:43:05.173778 | orchestrator | 2026-01-02 00:43:05.173784 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:43:05.173790 | orchestrator | Friday 02 January 2026 00:43:05 +0000 (0:00:01.013) 0:00:48.248 ******** 2026-01-02 00:43:05.173796 | orchestrator | =============================================================================== 2026-01-02 00:43:05.173802 | orchestrator | Write configuration file ------------------------------------------------ 4.93s 2026-01-02 00:43:05.173809 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.54s 2026-01-02 00:43:05.173815 | orchestrator | Add known links to the list of available block devices ------------------ 1.49s 2026-01-02 00:43:05.173821 | orchestrator | Add known partitions to the list of available block devices ------------- 1.26s 2026-01-02 00:43:05.173836 | orchestrator | Add known partitions to the list of available block devices ------------- 1.20s 2026-01-02 
00:43:05.173842 | orchestrator | Add known links to the list of available block devices ------------------ 1.05s 2026-01-02 00:43:05.173848 | orchestrator | Add known links to the list of available block devices ------------------ 1.03s 2026-01-02 00:43:05.173854 | orchestrator | Print configuration data ------------------------------------------------ 0.99s 2026-01-02 00:43:05.173861 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s 2026-01-02 00:43:05.173867 | orchestrator | Add known links to the list of available block devices ------------------ 0.89s 2026-01-02 00:43:05.173873 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s 2026-01-02 00:43:05.173879 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2026-01-02 00:43:05.173885 | orchestrator | Add known links to the list of available block devices ------------------ 0.83s 2026-01-02 00:43:05.173900 | orchestrator | Get initial list of available block devices ----------------------------- 0.83s 2026-01-02 00:43:05.650293 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.81s 2026-01-02 00:43:05.650432 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.74s 2026-01-02 00:43:05.650448 | orchestrator | Print WAL devices ------------------------------------------------------- 0.74s 2026-01-02 00:43:05.650461 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2026-01-02 00:43:05.650473 | orchestrator | Set DB devices config data ---------------------------------------------- 0.70s 2026-01-02 00:43:05.650485 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.64s 2026-01-02 00:43:28.355208 | orchestrator | 2026-01-02 00:43:28 | INFO  | Task 53afef31-11b7-4933-b5ef-46b8dbe957e3 (sync inventory) is running in 
background. Output coming soon. 2026-01-02 00:43:57.258951 | orchestrator | 2026-01-02 00:43:29 | INFO  | Starting group_vars file reorganization 2026-01-02 00:43:57.259105 | orchestrator | 2026-01-02 00:43:29 | INFO  | Moved 0 file(s) to their respective directories 2026-01-02 00:43:57.259125 | orchestrator | 2026-01-02 00:43:29 | INFO  | Group_vars file reorganization completed 2026-01-02 00:43:57.259138 | orchestrator | 2026-01-02 00:43:32 | INFO  | Starting variable preparation from inventory 2026-01-02 00:43:57.259151 | orchestrator | 2026-01-02 00:43:35 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-01-02 00:43:57.259162 | orchestrator | 2026-01-02 00:43:35 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-01-02 00:43:57.259196 | orchestrator | 2026-01-02 00:43:35 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-01-02 00:43:57.259208 | orchestrator | 2026-01-02 00:43:35 | INFO  | 3 file(s) written, 6 host(s) processed 2026-01-02 00:43:57.259220 | orchestrator | 2026-01-02 00:43:35 | INFO  | Variable preparation completed 2026-01-02 00:43:57.259231 | orchestrator | 2026-01-02 00:43:37 | INFO  | Starting inventory overwrite handling 2026-01-02 00:43:57.259248 | orchestrator | 2026-01-02 00:43:37 | INFO  | Handling group overwrites in 99-overwrite 2026-01-02 00:43:57.259260 | orchestrator | 2026-01-02 00:43:37 | INFO  | Removing group frr:children from 60-generic 2026-01-02 00:43:57.259271 | orchestrator | 2026-01-02 00:43:37 | INFO  | Removing group netbird:children from 50-infrastructure 2026-01-02 00:43:57.259282 | orchestrator | 2026-01-02 00:43:37 | INFO  | Removing group ceph-rgw from 50-ceph 2026-01-02 00:43:57.259293 | orchestrator | 2026-01-02 00:43:37 | INFO  | Removing group ceph-mds from 50-ceph 2026-01-02 00:43:57.259304 | orchestrator | 2026-01-02 00:43:37 | INFO  | Handling group overwrites in 20-roles 2026-01-02 00:43:57.259339 | orchestrator | 2026-01-02 00:43:37 | 
INFO  | Removing group k3s_node from 50-infrastructure 2026-01-02 00:43:57.259351 | orchestrator | 2026-01-02 00:43:37 | INFO  | Removed 5 group(s) in total 2026-01-02 00:43:57.259362 | orchestrator | 2026-01-02 00:43:37 | INFO  | Inventory overwrite handling completed 2026-01-02 00:43:57.259373 | orchestrator | 2026-01-02 00:43:39 | INFO  | Starting merge of inventory files 2026-01-02 00:43:57.259384 | orchestrator | 2026-01-02 00:43:39 | INFO  | Inventory files merged successfully 2026-01-02 00:43:57.259395 | orchestrator | 2026-01-02 00:43:44 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-01-02 00:43:57.259406 | orchestrator | 2026-01-02 00:43:55 | INFO  | Successfully wrote ClusterShell configuration 2026-01-02 00:43:57.259418 | orchestrator | [master d7b0c8b] 2026-01-02-00-43 2026-01-02 00:43:57.259432 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-01-02 00:43:59.909542 | orchestrator | 2026-01-02 00:43:59 | INFO  | Task 0e74aac6-2676-4e31-b5f0-6fc29092a9ee (ceph-create-lvm-devices) was prepared for execution. 2026-01-02 00:43:59.909615 | orchestrator | 2026-01-02 00:43:59 | INFO  | It takes a moment until task 0e74aac6-2676-4e31-b5f0-6fc29092a9ee (ceph-create-lvm-devices) has been started and output is visible here. 
2026-01-02 00:44:13.734321 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-02 00:44:13.734409 | orchestrator | 2.16.14
2026-01-02 00:44:13.734418 | orchestrator |
2026-01-02 00:44:13.734424 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-02 00:44:13.734431 | orchestrator |
2026-01-02 00:44:13.734436 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-02 00:44:13.734441 | orchestrator | Friday 02 January 2026  00:44:04 +0000 (0:00:00.321) 0:00:00.321 ********
2026-01-02 00:44:13.734447 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-02 00:44:13.734452 | orchestrator |
2026-01-02 00:44:13.734457 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-02 00:44:13.734462 | orchestrator | Friday 02 January 2026  00:44:05 +0000 (0:00:00.257) 0:00:00.579 ********
2026-01-02 00:44:13.734467 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:44:13.734473 | orchestrator |
2026-01-02 00:44:13.734479 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:44:13.734484 | orchestrator | Friday 02 January 2026  00:44:05 +0000 (0:00:00.583) 0:00:00.819 ********
2026-01-02 00:44:13.734489 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-02 00:44:13.734494 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-02 00:44:13.734499 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-02 00:44:13.734504 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-02 00:44:13.734509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-02 00:44:13.734514 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-02 00:44:13.734518 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-02 00:44:13.734523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-02 00:44:13.734528 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-02 00:44:13.734533 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-02 00:44:13.734538 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-02 00:44:13.734543 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-02 00:44:13.734564 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-02 00:44:13.734570 | orchestrator |
2026-01-02 00:44:13.734575 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:44:13.734590 | orchestrator | Friday 02 January 2026  00:44:05 +0000 (0:00:00.583) 0:00:01.403 ********
2026-01-02 00:44:13.734595 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:13.734600 | orchestrator |
2026-01-02 00:44:13.734611 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:44:13.734616 | orchestrator | Friday 02 January 2026  00:44:06 +0000 (0:00:00.219) 0:00:01.622 ********
2026-01-02 00:44:13.734621 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:13.734626 | orchestrator |
2026-01-02 00:44:13.734631 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:44:13.734636 | orchestrator | Friday 02 January 2026  00:44:06 +0000 (0:00:00.212) 0:00:01.835 ********
2026-01-02 00:44:13.734641 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:13.734646 | orchestrator |
2026-01-02 00:44:13.734651 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:44:13.734656 | orchestrator | Friday 02 January 2026  00:44:06 +0000 (0:00:00.204) 0:00:02.039 ********
2026-01-02 00:44:13.734661 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:13.734666 | orchestrator |
2026-01-02 00:44:13.734671 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:44:13.734676 | orchestrator | Friday 02 January 2026  00:44:06 +0000 (0:00:00.217) 0:00:02.257 ********
2026-01-02 00:44:13.734681 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:13.734686 | orchestrator |
2026-01-02 00:44:13.734699 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:44:13.734704 | orchestrator | Friday 02 January 2026  00:44:07 +0000 (0:00:00.248) 0:00:02.505 ********
2026-01-02 00:44:13.734714 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:13.734719 | orchestrator |
2026-01-02 00:44:13.734724 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:44:13.734729 | orchestrator | Friday 02 January 2026  00:44:07 +0000 (0:00:00.234) 0:00:02.740 ********
2026-01-02 00:44:13.734734 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:13.734739 | orchestrator |
2026-01-02 00:44:13.734744 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:44:13.734749 | orchestrator | Friday 02 January 2026  00:44:07 +0000 (0:00:00.223) 0:00:02.963 ********
2026-01-02 00:44:13.734754 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:13.734759 | orchestrator |
2026-01-02 00:44:13.734763 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:44:13.734768 | orchestrator | Friday 02 January 2026  00:44:07 +0000 (0:00:00.235) 0:00:03.198 ********
2026-01-02 00:44:13.734773 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397)
2026-01-02 00:44:13.734779 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397)
2026-01-02 00:44:13.734785 | orchestrator |
2026-01-02 00:44:13.734789 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:44:13.734806 | orchestrator | Friday 02 January 2026  00:44:08 +0000 (0:00:00.483) 0:00:03.682 ********
2026-01-02 00:44:13.734811 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6d9d2903-81fe-42d1-9111-d7d9a87231b0)
2026-01-02 00:44:13.734816 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6d9d2903-81fe-42d1-9111-d7d9a87231b0)
2026-01-02 00:44:13.734821 | orchestrator |
2026-01-02 00:44:13.734826 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:44:13.734831 | orchestrator | Friday 02 January 2026  00:44:09 +0000 (0:00:00.881) 0:00:04.564 ********
2026-01-02 00:44:13.734835 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_91cfe094-4682-4bfc-95e3-88354566cb8a)
2026-01-02 00:44:13.734845 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_91cfe094-4682-4bfc-95e3-88354566cb8a)
2026-01-02 00:44:13.734850 | orchestrator |
2026-01-02 00:44:13.734855 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:44:13.734859 | orchestrator | Friday 02 January 2026  00:44:09 +0000 (0:00:00.920) 0:00:05.484 ********
2026-01-02 00:44:13.734864 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ace49a83-40fe-462c-82a5-a32ee72a9346)
2026-01-02 00:44:13.734869 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ace49a83-40fe-462c-82a5-a32ee72a9346)
2026-01-02 00:44:13.734874 | orchestrator |
2026-01-02 00:44:13.734879 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:44:13.734886 | orchestrator | Friday 02 January 2026  00:44:11 +0000 (0:00:01.130) 0:00:06.615 ********
2026-01-02 00:44:13.734891 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-02 00:44:13.734897 | orchestrator |
2026-01-02 00:44:13.734902 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:44:13.734908 | orchestrator | Friday 02 January 2026  00:44:11 +0000 (0:00:00.419) 0:00:07.035 ********
2026-01-02 00:44:13.734913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-02 00:44:13.734919 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-02 00:44:13.734924 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-02 00:44:13.734943 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-02 00:44:13.734949 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-02 00:44:13.734955 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-02 00:44:13.734960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-02 00:44:13.734966 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-02 00:44:13.734971 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-02 00:44:13.734977 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-02 00:44:13.734983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-02 00:44:13.734991 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-02 00:44:13.734997 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-02 00:44:13.735002 | orchestrator |
2026-01-02 00:44:13.735008 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:44:13.735014 | orchestrator | Friday 02 January 2026  00:44:12 +0000 (0:00:00.532) 0:00:07.567 ********
2026-01-02 00:44:13.735019 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:13.735025 | orchestrator |
2026-01-02 00:44:13.735030 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:44:13.735036 | orchestrator | Friday 02 January 2026  00:44:12 +0000 (0:00:00.279) 0:00:07.847 ********
2026-01-02 00:44:13.735042 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:13.735047 | orchestrator |
2026-01-02 00:44:13.735052 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:44:13.735071 | orchestrator | Friday 02 January 2026  00:44:12 +0000 (0:00:00.219) 0:00:08.067 ********
2026-01-02 00:44:13.735077 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:13.735083 | orchestrator |
2026-01-02 00:44:13.735088 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:44:13.735094 | orchestrator | Friday 02 January 2026  00:44:12 +0000 (0:00:00.258) 0:00:08.325 ********
2026-01-02 00:44:13.735099 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:13.735108 | orchestrator |
2026-01-02 00:44:13.735114 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:44:13.735120 | orchestrator | Friday 02 January 2026  00:44:13 +0000 (0:00:00.221) 0:00:08.546 ********
2026-01-02 00:44:13.735126 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:13.735131 | orchestrator |
2026-01-02 00:44:13.735137 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:44:13.735143 | orchestrator | Friday 02 January 2026  00:44:13 +0000 (0:00:00.223) 0:00:08.769 ********
2026-01-02 00:44:13.735149 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:13.735154 | orchestrator |
2026-01-02 00:44:13.735160 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:44:13.735166 | orchestrator | Friday 02 January 2026  00:44:13 +0000 (0:00:00.219) 0:00:08.988 ********
2026-01-02 00:44:13.735171 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:13.735177 | orchestrator |
2026-01-02 00:44:13.735187 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:44:22.638187 | orchestrator | Friday 02 January 2026  00:44:13 +0000 (0:00:00.240) 0:00:09.229 ********
2026-01-02 00:44:22.638320 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:22.638344 | orchestrator |
2026-01-02 00:44:22.638359 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:44:22.638372 | orchestrator | Friday 02 January 2026  00:44:13 +0000 (0:00:00.235) 0:00:09.465 ********
2026-01-02 00:44:22.638386 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-02 00:44:22.638402 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-02 00:44:22.638416 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-02 00:44:22.638431 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-02 00:44:22.638445 | orchestrator |
2026-01-02 00:44:22.638458 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:44:22.638470 | orchestrator | Friday 02 January 2026  00:44:15 +0000 (0:00:01.496) 0:00:10.961 ********
2026-01-02 00:44:22.638478 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:22.638487 | orchestrator |
2026-01-02 00:44:22.638495 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:44:22.638503 | orchestrator | Friday 02 January 2026  00:44:15 +0000 (0:00:00.252) 0:00:11.214 ********
2026-01-02 00:44:22.638512 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:22.638520 | orchestrator |
2026-01-02 00:44:22.638528 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:44:22.638537 | orchestrator | Friday 02 January 2026  00:44:15 +0000 (0:00:00.251) 0:00:11.465 ********
2026-01-02 00:44:22.638545 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:22.638553 | orchestrator |
2026-01-02 00:44:22.638561 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:44:22.638569 | orchestrator | Friday 02 January 2026  00:44:16 +0000 (0:00:00.221) 0:00:11.687 ********
2026-01-02 00:44:22.638577 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:22.638585 | orchestrator |
2026-01-02 00:44:22.638593 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-02 00:44:22.638601 | orchestrator | Friday 02 January 2026  00:44:16 +0000 (0:00:00.206) 0:00:11.894 ********
2026-01-02 00:44:22.638609 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:22.638617 | orchestrator |
2026-01-02 00:44:22.638624 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-02 00:44:22.638633 | orchestrator | Friday 02 January 2026  00:44:16 +0000 (0:00:00.129) 0:00:12.023 ********
2026-01-02 00:44:22.638641 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c483f3a2-63e3-5a58-8db6-ff291b90fd92'}})
2026-01-02 00:44:22.638650 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'}})
2026-01-02 00:44:22.638658 | orchestrator |
2026-01-02 00:44:22.638666 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-02 00:44:22.638699 | orchestrator | Friday 02 January 2026  00:44:16 +0000 (0:00:00.225) 0:00:12.249 ********
2026-01-02 00:44:22.638710 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})
2026-01-02 00:44:22.638719 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})
2026-01-02 00:44:22.638727 | orchestrator |
2026-01-02 00:44:22.638735 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-02 00:44:22.638743 | orchestrator | Friday 02 January 2026  00:44:18 +0000 (0:00:02.110) 0:00:14.359 ********
2026-01-02 00:44:22.638751 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})
2026-01-02 00:44:22.638761 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})
2026-01-02 00:44:22.638769 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:22.638777 | orchestrator |
2026-01-02 00:44:22.638785 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-02 00:44:22.638793 | orchestrator | Friday 02 January 2026  00:44:19 +0000 (0:00:00.159) 0:00:14.519 ********
2026-01-02 00:44:22.638801 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})
2026-01-02 00:44:22.638809 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})
2026-01-02 00:44:22.638817 | orchestrator |
2026-01-02 00:44:22.638825 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-02 00:44:22.638833 | orchestrator | Friday 02 January 2026  00:44:20 +0000 (0:00:01.513) 0:00:16.033 ********
2026-01-02 00:44:22.638841 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})
2026-01-02 00:44:22.638849 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})
2026-01-02 00:44:22.638857 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:22.638865 | orchestrator |
2026-01-02 00:44:22.638873 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-02 00:44:22.638881 | orchestrator | Friday 02 January 2026  00:44:20 +0000 (0:00:00.136) 0:00:16.194 ********
2026-01-02 00:44:22.638908 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:22.638916 | orchestrator |
2026-01-02 00:44:22.638924 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-02 00:44:22.638932 | orchestrator | Friday 02 January 2026  00:44:20 +0000 (0:00:00.136) 0:00:16.331 ********
2026-01-02 00:44:22.638940 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})
2026-01-02 00:44:22.638948 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})
2026-01-02 00:44:22.638956 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:22.638964 | orchestrator |
2026-01-02 00:44:22.638972 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-02 00:44:22.638980 | orchestrator | Friday 02 January 2026  00:44:21 +0000 (0:00:00.408) 0:00:16.740 ********
2026-01-02 00:44:22.638988 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:22.638996 | orchestrator |
2026-01-02 00:44:22.639004 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-02 00:44:22.639012 | orchestrator | Friday 02 January 2026  00:44:21 +0000 (0:00:00.154) 0:00:16.894 ********
2026-01-02 00:44:22.639026 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})
2026-01-02 00:44:22.639034 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})
2026-01-02 00:44:22.639042 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:22.639050 | orchestrator |
2026-01-02 00:44:22.639080 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-02 00:44:22.639089 | orchestrator | Friday 02 January 2026  00:44:21 +0000 (0:00:00.152) 0:00:17.047 ********
2026-01-02 00:44:22.639096 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:22.639105 | orchestrator |
2026-01-02 00:44:22.639113 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-02 00:44:22.639120 | orchestrator | Friday 02 January 2026  00:44:21 +0000 (0:00:00.142) 0:00:17.189 ********
2026-01-02 00:44:22.639128 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})
2026-01-02 00:44:22.639137 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})
2026-01-02 00:44:22.639145 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:22.639153 | orchestrator |
2026-01-02 00:44:22.639160 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-02 00:44:22.639168 | orchestrator | Friday 02 January 2026  00:44:21 +0000 (0:00:00.141) 0:00:17.349 ********
2026-01-02 00:44:22.639176 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:44:22.639184 | orchestrator |
2026-01-02 00:44:22.639192 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-02 00:44:22.639216 | orchestrator | Friday 02 January 2026  00:44:21 +0000 (0:00:00.141) 0:00:17.490 ********
2026-01-02 00:44:22.639228 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})
2026-01-02 00:44:22.639236 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})
2026-01-02 00:44:22.639244 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:22.639252 | orchestrator |
2026-01-02 00:44:22.639260 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-02 00:44:22.639268 | orchestrator | Friday 02 January 2026  00:44:22 +0000 (0:00:00.178) 0:00:17.669 ********
2026-01-02 00:44:22.639276 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})
2026-01-02 00:44:22.639284 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})
2026-01-02 00:44:22.639292 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:22.639302 | orchestrator |
2026-01-02 00:44:22.639316 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-02 00:44:22.639329 | orchestrator | Friday 02 January 2026  00:44:22 +0000 (0:00:00.162) 0:00:17.832 ********
2026-01-02 00:44:22.639343 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})
2026-01-02 00:44:22.639356 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})
2026-01-02 00:44:22.639368 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:22.639384 | orchestrator |
2026-01-02 00:44:22.639404 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-02 00:44:22.639439 | orchestrator | Friday 02 January 2026  00:44:22 +0000 (0:00:00.163) 0:00:17.996 ********
2026-01-02 00:44:22.639462 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:22.639508 | orchestrator |
2026-01-02 00:44:22.639541 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-02 00:44:22.639571 | orchestrator | Friday 02 January 2026  00:44:22 +0000 (0:00:00.138) 0:00:18.134 ********
2026-01-02 00:44:29.605708 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.605804 | orchestrator |
2026-01-02 00:44:29.605817 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-02 00:44:29.605827 | orchestrator | Friday 02 January 2026  00:44:22 +0000 (0:00:00.138) 0:00:18.272 ********
2026-01-02 00:44:29.605835 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.605842 | orchestrator |
2026-01-02 00:44:29.605850 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-02 00:44:29.605858 | orchestrator | Friday 02 January 2026  00:44:22 +0000 (0:00:00.136) 0:00:18.409 ********
2026-01-02 00:44:29.605865 | orchestrator | ok: [testbed-node-3] => {
2026-01-02 00:44:29.605873 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-02 00:44:29.605881 | orchestrator | }
2026-01-02 00:44:29.605889 | orchestrator |
2026-01-02 00:44:29.605896 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-02 00:44:29.605904 | orchestrator | Friday 02 January 2026  00:44:23 +0000 (0:00:00.368) 0:00:18.777 ********
2026-01-02 00:44:29.605911 | orchestrator | ok: [testbed-node-3] => {
2026-01-02 00:44:29.605919 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-02 00:44:29.605926 | orchestrator | }
2026-01-02 00:44:29.605933 | orchestrator |
2026-01-02 00:44:29.605941 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-02 00:44:29.605948 | orchestrator | Friday 02 January 2026  00:44:23 +0000 (0:00:00.170) 0:00:18.948 ********
2026-01-02 00:44:29.605957 | orchestrator | ok: [testbed-node-3] => {
2026-01-02 00:44:29.605964 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-02 00:44:29.605972 | orchestrator | }
2026-01-02 00:44:29.605979 | orchestrator |
2026-01-02 00:44:29.605987 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-02 00:44:29.605994 | orchestrator | Friday 02 January 2026  00:44:23 +0000 (0:00:00.180) 0:00:19.129 ********
2026-01-02 00:44:29.606002 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:44:29.606010 | orchestrator |
2026-01-02 00:44:29.606100 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-02 00:44:29.606110 | orchestrator | Friday 02 January 2026  00:44:24 +0000 (0:00:00.729) 0:00:19.858 ********
2026-01-02 00:44:29.606117 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:44:29.606125 | orchestrator |
2026-01-02 00:44:29.606133 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-02 00:44:29.606140 | orchestrator | Friday 02 January 2026  00:44:24 +0000 (0:00:00.549) 0:00:20.408 ********
2026-01-02 00:44:29.606148 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:44:29.606155 | orchestrator |
2026-01-02 00:44:29.606163 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-02 00:44:29.606170 | orchestrator | Friday 02 January 2026  00:44:25 +0000 (0:00:00.577) 0:00:20.985 ********
2026-01-02 00:44:29.606179 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:44:29.606191 | orchestrator |
2026-01-02 00:44:29.606204 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-02 00:44:29.606216 | orchestrator | Friday 02 January 2026  00:44:25 +0000 (0:00:00.147) 0:00:21.132 ********
2026-01-02 00:44:29.606226 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.606238 | orchestrator |
2026-01-02 00:44:29.606251 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-02 00:44:29.606266 | orchestrator | Friday 02 January 2026  00:44:25 +0000 (0:00:00.111) 0:00:21.244 ********
2026-01-02 00:44:29.606279 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.606291 | orchestrator |
2026-01-02 00:44:29.606300 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-02 00:44:29.606342 | orchestrator | Friday 02 January 2026  00:44:25 +0000 (0:00:00.107) 0:00:21.351 ********
2026-01-02 00:44:29.606351 | orchestrator | ok: [testbed-node-3] => {
2026-01-02 00:44:29.606361 | orchestrator |     "vgs_report": {
2026-01-02 00:44:29.606370 | orchestrator |         "vg": []
2026-01-02 00:44:29.606379 | orchestrator |     }
2026-01-02 00:44:29.606388 | orchestrator | }
2026-01-02 00:44:29.606397 | orchestrator |
2026-01-02 00:44:29.606405 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-02 00:44:29.606414 | orchestrator | Friday 02 January 2026  00:44:26 +0000 (0:00:00.162) 0:00:21.514 ********
2026-01-02 00:44:29.606423 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.606431 | orchestrator |
2026-01-02 00:44:29.606440 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-02 00:44:29.606448 | orchestrator | Friday 02 January 2026  00:44:26 +0000 (0:00:00.169) 0:00:21.683 ********
2026-01-02 00:44:29.606457 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.606465 | orchestrator |
2026-01-02 00:44:29.606474 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-02 00:44:29.606483 | orchestrator | Friday 02 January 2026  00:44:26 +0000 (0:00:00.151) 0:00:21.834 ********
2026-01-02 00:44:29.606491 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.606500 | orchestrator |
2026-01-02 00:44:29.606509 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-02 00:44:29.606517 | orchestrator | Friday 02 January 2026  00:44:26 +0000 (0:00:00.415) 0:00:22.250 ********
2026-01-02 00:44:29.606526 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.606535 | orchestrator |
2026-01-02 00:44:29.606543 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-02 00:44:29.606552 | orchestrator | Friday 02 January 2026  00:44:26 +0000 (0:00:00.140) 0:00:22.391 ********
2026-01-02 00:44:29.606560 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.606569 | orchestrator |
2026-01-02 00:44:29.606577 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-02 00:44:29.606586 | orchestrator | Friday 02 January 2026  00:44:27 +0000 (0:00:00.183) 0:00:22.574 ********
2026-01-02 00:44:29.606594 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.606603 | orchestrator |
2026-01-02 00:44:29.606612 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-02 00:44:29.606619 | orchestrator | Friday 02 January 2026  00:44:27 +0000 (0:00:00.145) 0:00:22.719 ********
2026-01-02 00:44:29.606627 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.606634 | orchestrator |
2026-01-02 00:44:29.606641 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-02 00:44:29.606649 | orchestrator | Friday 02 January 2026  00:44:27 +0000 (0:00:00.163) 0:00:22.882 ********
2026-01-02 00:44:29.606672 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.606680 | orchestrator |
2026-01-02 00:44:29.606687 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-02 00:44:29.606695 | orchestrator | Friday 02 January 2026  00:44:27 +0000 (0:00:00.142) 0:00:23.025 ********
2026-01-02 00:44:29.606702 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.606709 | orchestrator |
2026-01-02 00:44:29.606717 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-02 00:44:29.606724 | orchestrator | Friday 02 January 2026  00:44:27 +0000 (0:00:00.153) 0:00:23.179 ********
2026-01-02 00:44:29.606731 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.606739 | orchestrator |
2026-01-02 00:44:29.606746 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-02 00:44:29.606753 | orchestrator | Friday 02 January 2026  00:44:27 +0000 (0:00:00.136) 0:00:23.315 ********
2026-01-02 00:44:29.606761 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.606768 | orchestrator |
2026-01-02 00:44:29.606775 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-02 00:44:29.606783 | orchestrator | Friday 02 January 2026  00:44:27 +0000 (0:00:00.148) 0:00:23.463 ********
2026-01-02 00:44:29.606798 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.606805 | orchestrator |
2026-01-02 00:44:29.606813 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-02 00:44:29.606820 | orchestrator | Friday 02 January 2026  00:44:28 +0000 (0:00:00.158) 0:00:23.622 ********
2026-01-02 00:44:29.606828 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.606835 | orchestrator |
2026-01-02 00:44:29.606842 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-02 00:44:29.606850 | orchestrator | Friday 02 January 2026  00:44:28 +0000 (0:00:00.148) 0:00:23.770 ********
2026-01-02 00:44:29.606857 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.606864 | orchestrator |
2026-01-02 00:44:29.606872 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-02 00:44:29.606879 | orchestrator | Friday 02 January 2026  00:44:28 +0000 (0:00:00.140) 0:00:23.910 ********
2026-01-02 00:44:29.606888 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})
2026-01-02 00:44:29.606897 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})
2026-01-02 00:44:29.606905 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.606912 | orchestrator |
2026-01-02 00:44:29.606919 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-02 00:44:29.606927 | orchestrator | Friday 02 January 2026  00:44:28 +0000 (0:00:00.387) 0:00:24.298 ********
2026-01-02 00:44:29.606934 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})
2026-01-02 00:44:29.606942 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})
2026-01-02 00:44:29.606949 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.606957 | orchestrator |
2026-01-02 00:44:29.606964 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-02 00:44:29.606972 | orchestrator | Friday 02 January 2026  00:44:28 +0000 (0:00:00.156) 0:00:24.455 ********
2026-01-02 00:44:29.606979 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})
2026-01-02 00:44:29.606987 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})
2026-01-02 00:44:29.606994 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.607001 | orchestrator |
2026-01-02 00:44:29.607009 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-02 00:44:29.607016 | orchestrator | Friday 02 January 2026  00:44:29 +0000 (0:00:00.164) 0:00:24.619 ********
2026-01-02 00:44:29.607023 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})
2026-01-02 00:44:29.607031 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})
2026-01-02 00:44:29.607038 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.607046 | orchestrator |
2026-01-02 00:44:29.607074 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-02 00:44:29.607084 | orchestrator | Friday 02 January 2026  00:44:29 +0000 (0:00:00.152) 0:00:24.772 ********
2026-01-02 00:44:29.607091 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})
2026-01-02 00:44:29.607098 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})
2026-01-02 00:44:29.607112 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:29.607120 | orchestrator |
2026-01-02 00:44:29.607127 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-02 00:44:29.607142 | orchestrator | Friday 02 January 2026  00:44:29 +0000 (0:00:00.160) 0:00:24.932 ********
2026-01-02 00:44:29.607155 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})
2026-01-02 00:44:35.406285 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})
2026-01-02 00:44:35.406385 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:35.406398 | orchestrator |
2026-01-02 00:44:35.406408 | orchestrator | TASK [Create DB LVs for
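The "Add known links" tasks above resolve the stable `/dev/disk/by-id` aliases (`scsi-0QEMU_QEMU_HARDDISK_<serial>`, `ata-QEMU_DVD-ROM_QM00001`) back to the kernel device names (`sda`..`sdd`, `sr0`) gathered earlier. A minimal Python sketch of that resolution, assuming only the by-id symlink layout seen in the log (the helper name is ours, not the playbook's):

```python
import os

def resolve_device_links(by_id_dir="/dev/disk/by-id"):
    """Map each by-id symlink (e.g. scsi-0QEMU_QEMU_HARDDISK_<serial>)
    to the real block device it points at (e.g. /dev/sdb)."""
    links = {}
    for name in sorted(os.listdir(by_id_dir)):
        path = os.path.join(by_id_dir, name)
        if os.path.islink(path):
            links[name] = os.path.realpath(path)
    return links
```

With this mapping, a role can match stable by-id names against the short device list regardless of enumeration order.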
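The "Create dict of block VGs -> PVs" and "Create block VGs/LVs" tasks above derive LVM names from each device's `osd_lvm_uuid`: a volume group `ceph-<uuid>` on the device, holding one logical volume `osd-block-<uuid>`. A sketch of just that name derivation, using the `sdb`/`sdc` entries reported for testbed-node-3 (the function is illustrative; the actual LVM changes are driven by the playbook, presumably via the `community.general` modules flagged in the warning at the top):

```python
def ceph_lvm_names(ceph_osd_devices):
    """Derive the block VG/LV names the play creates from
    ceph_osd_devices, a dict of device -> {'osd_lvm_uuid': ...}."""
    out = []
    for device, value in ceph_osd_devices.items():
        uuid = value["osd_lvm_uuid"]
        out.append({
            "pv": f"/dev/{device}",                # physical volume backing the VG
            "data_vg": f"ceph-{uuid}",             # volume group name
            "data": f"osd-block-{uuid}",           # logical volume name
        })
    return out

# Devices as reported for testbed-node-3 in the log
devices = {
    "sdb": {"osd_lvm_uuid": "c483f3a2-63e3-5a58-8db6-ff291b90fd92"},
    "sdc": {"osd_lvm_uuid": "7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa"},
}
```

The `data`/`data_vg` pairs produced this way match the loop items shown as `changed:` in the two tasks above.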
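The guard tasks above are all skipped in this run because no `ceph_db_devices`/`ceph_db_wal_devices` are configured. Read together, they enforce that requested DB LVs fit into the VG's free space and stay at or above 30 GiB each. A hypothetical sketch of that combined check (names and signature are ours; only the 30 GiB bound and the two conditions come from the task names):

```python
MIN_DB_LV_BYTES = 30 * 1024**3  # 30 GiB lower bound, per the task names

def check_db_lv_size(db_lv_bytes, vg_free_bytes, num_lvs=1):
    """Mirror the two guard conditions: the LVs must fit into the VG's
    free space, and each DB LV must be at least 30 GiB."""
    if db_lv_bytes * num_lvs > vg_free_bytes:
        raise ValueError("size of DB LVs exceeds available VG space")
    if db_lv_bytes < MIN_DB_LV_BYTES:
        raise ValueError("DB LV size < 30 GiB")
    return True
```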
ceph_db_wal_devices] *********************************** 2026-01-02 00:44:35.406418 | orchestrator | Friday 02 January 2026 00:44:29 +0000 (0:00:00.172) 0:00:25.105 ******** 2026-01-02 00:44:35.406426 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})  2026-01-02 00:44:35.406435 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})  2026-01-02 00:44:35.406443 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:44:35.406451 | orchestrator | 2026-01-02 00:44:35.406459 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-02 00:44:35.406467 | orchestrator | Friday 02 January 2026 00:44:29 +0000 (0:00:00.166) 0:00:25.271 ******** 2026-01-02 00:44:35.406476 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})  2026-01-02 00:44:35.406484 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})  2026-01-02 00:44:35.406492 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:44:35.406500 | orchestrator | 2026-01-02 00:44:35.406508 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-02 00:44:35.406516 | orchestrator | Friday 02 January 2026 00:44:29 +0000 (0:00:00.190) 0:00:25.462 ******** 2026-01-02 00:44:35.406524 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:44:35.406533 | orchestrator | 2026-01-02 00:44:35.406541 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-02 00:44:35.406549 | orchestrator | Friday 02 January 2026 00:44:30 +0000 
(0:00:00.550) 0:00:26.012 ******** 2026-01-02 00:44:35.406557 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:44:35.406565 | orchestrator | 2026-01-02 00:44:35.406573 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-02 00:44:35.406581 | orchestrator | Friday 02 January 2026 00:44:31 +0000 (0:00:00.520) 0:00:26.533 ******** 2026-01-02 00:44:35.406589 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:44:35.406597 | orchestrator | 2026-01-02 00:44:35.406605 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-02 00:44:35.406613 | orchestrator | Friday 02 January 2026 00:44:31 +0000 (0:00:00.184) 0:00:26.717 ******** 2026-01-02 00:44:35.406622 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'vg_name': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'}) 2026-01-02 00:44:35.406646 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'vg_name': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'}) 2026-01-02 00:44:35.406654 | orchestrator | 2026-01-02 00:44:35.406662 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-02 00:44:35.406670 | orchestrator | Friday 02 January 2026 00:44:31 +0000 (0:00:00.217) 0:00:26.934 ******** 2026-01-02 00:44:35.406699 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})  2026-01-02 00:44:35.406708 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})  2026-01-02 00:44:35.406716 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:44:35.406724 | orchestrator | 2026-01-02 00:44:35.406731 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-01-02 00:44:35.406739 | orchestrator | Friday 02 January 2026 00:44:31 +0000 (0:00:00.411) 0:00:27.346 ******** 2026-01-02 00:44:35.406747 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})  2026-01-02 00:44:35.406755 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})  2026-01-02 00:44:35.406763 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:44:35.406772 | orchestrator | 2026-01-02 00:44:35.406779 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-02 00:44:35.406787 | orchestrator | Friday 02 January 2026 00:44:32 +0000 (0:00:00.176) 0:00:27.522 ******** 2026-01-02 00:44:35.406795 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})  2026-01-02 00:44:35.406803 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})  2026-01-02 00:44:35.406811 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:44:35.406819 | orchestrator | 2026-01-02 00:44:35.406827 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-02 00:44:35.406837 | orchestrator | Friday 02 January 2026 00:44:32 +0000 (0:00:00.179) 0:00:27.702 ******** 2026-01-02 00:44:35.406861 | orchestrator | ok: [testbed-node-3] => { 2026-01-02 00:44:35.406871 | orchestrator |  "lvm_report": { 2026-01-02 00:44:35.406881 | orchestrator |  "lv": [ 2026-01-02 00:44:35.406890 | orchestrator |  { 2026-01-02 00:44:35.406900 | orchestrator |  "lv_name": 
"osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa", 2026-01-02 00:44:35.406911 | orchestrator |  "vg_name": "ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa" 2026-01-02 00:44:35.406920 | orchestrator |  }, 2026-01-02 00:44:35.406929 | orchestrator |  { 2026-01-02 00:44:35.406938 | orchestrator |  "lv_name": "osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92", 2026-01-02 00:44:35.406947 | orchestrator |  "vg_name": "ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92" 2026-01-02 00:44:35.406956 | orchestrator |  } 2026-01-02 00:44:35.406965 | orchestrator |  ], 2026-01-02 00:44:35.406974 | orchestrator |  "pv": [ 2026-01-02 00:44:35.406983 | orchestrator |  { 2026-01-02 00:44:35.406992 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-02 00:44:35.407001 | orchestrator |  "vg_name": "ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92" 2026-01-02 00:44:35.407009 | orchestrator |  }, 2026-01-02 00:44:35.407019 | orchestrator |  { 2026-01-02 00:44:35.407028 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-02 00:44:35.407038 | orchestrator |  "vg_name": "ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa" 2026-01-02 00:44:35.407047 | orchestrator |  } 2026-01-02 00:44:35.407085 | orchestrator |  ] 2026-01-02 00:44:35.407095 | orchestrator |  } 2026-01-02 00:44:35.407105 | orchestrator | } 2026-01-02 00:44:35.407114 | orchestrator | 2026-01-02 00:44:35.407124 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-02 00:44:35.407133 | orchestrator | 2026-01-02 00:44:35.407142 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-02 00:44:35.407159 | orchestrator | Friday 02 January 2026 00:44:32 +0000 (0:00:00.365) 0:00:28.068 ******** 2026-01-02 00:44:35.407168 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-02 00:44:35.407178 | orchestrator | 2026-01-02 00:44:35.407188 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-02 
00:44:35.407196 | orchestrator | Friday 02 January 2026 00:44:32 +0000 (0:00:00.277) 0:00:28.345 ******** 2026-01-02 00:44:35.407204 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:44:35.407212 | orchestrator | 2026-01-02 00:44:35.407220 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:35.407228 | orchestrator | Friday 02 January 2026 00:44:33 +0000 (0:00:00.262) 0:00:28.608 ******** 2026-01-02 00:44:35.407236 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-02 00:44:35.407244 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-02 00:44:35.407252 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-02 00:44:35.407260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-02 00:44:35.407268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-02 00:44:35.407276 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-02 00:44:35.407288 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-02 00:44:35.407297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-02 00:44:35.407305 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-02 00:44:35.407313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-02 00:44:35.407321 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-02 00:44:35.407328 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-02 00:44:35.407336 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-02 00:44:35.407344 | orchestrator | 2026-01-02 00:44:35.407352 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:35.407360 | orchestrator | Friday 02 January 2026 00:44:33 +0000 (0:00:00.531) 0:00:29.139 ******** 2026-01-02 00:44:35.407368 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:35.407376 | orchestrator | 2026-01-02 00:44:35.407384 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:35.407392 | orchestrator | Friday 02 January 2026 00:44:33 +0000 (0:00:00.210) 0:00:29.350 ******** 2026-01-02 00:44:35.407400 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:35.407408 | orchestrator | 2026-01-02 00:44:35.407416 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:35.407424 | orchestrator | Friday 02 January 2026 00:44:34 +0000 (0:00:00.240) 0:00:29.590 ******** 2026-01-02 00:44:35.407432 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:35.407440 | orchestrator | 2026-01-02 00:44:35.407448 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:35.407456 | orchestrator | Friday 02 January 2026 00:44:34 +0000 (0:00:00.674) 0:00:30.264 ******** 2026-01-02 00:44:35.407464 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:35.407472 | orchestrator | 2026-01-02 00:44:35.407480 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:35.407487 | orchestrator | Friday 02 January 2026 00:44:34 +0000 (0:00:00.201) 0:00:30.466 ******** 2026-01-02 00:44:35.407495 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:35.407503 | orchestrator | 2026-01-02 00:44:35.407511 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-01-02 00:44:35.407525 | orchestrator | Friday 02 January 2026 00:44:35 +0000 (0:00:00.217) 0:00:30.684 ******** 2026-01-02 00:44:35.407533 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:35.407541 | orchestrator | 2026-01-02 00:44:35.407554 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:47.435775 | orchestrator | Friday 02 January 2026 00:44:35 +0000 (0:00:00.219) 0:00:30.904 ******** 2026-01-02 00:44:47.435885 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:47.435901 | orchestrator | 2026-01-02 00:44:47.435912 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:47.435922 | orchestrator | Friday 02 January 2026 00:44:35 +0000 (0:00:00.212) 0:00:31.116 ******** 2026-01-02 00:44:47.435931 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:47.435940 | orchestrator | 2026-01-02 00:44:47.435949 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:47.435958 | orchestrator | Friday 02 January 2026 00:44:35 +0000 (0:00:00.241) 0:00:31.357 ******** 2026-01-02 00:44:47.435967 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6) 2026-01-02 00:44:47.435977 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6) 2026-01-02 00:44:47.435986 | orchestrator | 2026-01-02 00:44:47.435995 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:47.436004 | orchestrator | Friday 02 January 2026 00:44:36 +0000 (0:00:00.477) 0:00:31.835 ******** 2026-01-02 00:44:47.436012 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_84499345-a879-443a-82ee-40e5571fa8cd) 2026-01-02 00:44:47.436021 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_84499345-a879-443a-82ee-40e5571fa8cd) 2026-01-02 00:44:47.436030 | orchestrator | 2026-01-02 00:44:47.436039 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:47.436074 | orchestrator | Friday 02 January 2026 00:44:36 +0000 (0:00:00.466) 0:00:32.302 ******** 2026-01-02 00:44:47.436084 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7a849538-9b89-4e07-840a-8a2ecc10a58d) 2026-01-02 00:44:47.436093 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7a849538-9b89-4e07-840a-8a2ecc10a58d) 2026-01-02 00:44:47.436102 | orchestrator | 2026-01-02 00:44:47.436111 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:47.436120 | orchestrator | Friday 02 January 2026 00:44:37 +0000 (0:00:00.434) 0:00:32.736 ******** 2026-01-02 00:44:47.436129 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_496b1234-da7e-4975-8125-a1f8cbe1a452) 2026-01-02 00:44:47.436138 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_496b1234-da7e-4975-8125-a1f8cbe1a452) 2026-01-02 00:44:47.436147 | orchestrator | 2026-01-02 00:44:47.436156 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:47.436164 | orchestrator | Friday 02 January 2026 00:44:37 +0000 (0:00:00.704) 0:00:33.440 ******** 2026-01-02 00:44:47.436173 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-02 00:44:47.436182 | orchestrator | 2026-01-02 00:44:47.436191 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:47.436200 | orchestrator | Friday 02 January 2026 00:44:38 +0000 (0:00:00.628) 0:00:34.068 ******** 2026-01-02 00:44:47.436209 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-01-02 00:44:47.436219 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-02 00:44:47.436228 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-02 00:44:47.436237 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-02 00:44:47.436246 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-02 00:44:47.436297 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-02 00:44:47.436308 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-02 00:44:47.436319 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-02 00:44:47.436329 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-02 00:44:47.436340 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-02 00:44:47.436350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-02 00:44:47.436360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-02 00:44:47.436370 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-02 00:44:47.436380 | orchestrator | 2026-01-02 00:44:47.436390 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:47.436400 | orchestrator | Friday 02 January 2026 00:44:39 +0000 (0:00:00.963) 0:00:35.032 ******** 2026-01-02 00:44:47.436410 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:47.436420 | orchestrator | 2026-01-02 
00:44:47.436431 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:47.436441 | orchestrator | Friday 02 January 2026 00:44:39 +0000 (0:00:00.217) 0:00:35.249 ******** 2026-01-02 00:44:47.436452 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:47.436462 | orchestrator | 2026-01-02 00:44:47.436472 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:47.436482 | orchestrator | Friday 02 January 2026 00:44:39 +0000 (0:00:00.221) 0:00:35.471 ******** 2026-01-02 00:44:47.436492 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:47.436502 | orchestrator | 2026-01-02 00:44:47.436530 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:47.436542 | orchestrator | Friday 02 January 2026 00:44:40 +0000 (0:00:00.234) 0:00:35.706 ******** 2026-01-02 00:44:47.436552 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:47.436563 | orchestrator | 2026-01-02 00:44:47.436572 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:47.436582 | orchestrator | Friday 02 January 2026 00:44:40 +0000 (0:00:00.200) 0:00:35.906 ******** 2026-01-02 00:44:47.436592 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:47.436602 | orchestrator | 2026-01-02 00:44:47.436612 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:47.436622 | orchestrator | Friday 02 January 2026 00:44:40 +0000 (0:00:00.218) 0:00:36.125 ******** 2026-01-02 00:44:47.436632 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:47.436642 | orchestrator | 2026-01-02 00:44:47.436651 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:47.436660 | orchestrator | Friday 02 January 2026 00:44:40 +0000 (0:00:00.208) 
0:00:36.334 ******** 2026-01-02 00:44:47.436669 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:47.436678 | orchestrator | 2026-01-02 00:44:47.436686 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:47.436695 | orchestrator | Friday 02 January 2026 00:44:41 +0000 (0:00:00.231) 0:00:36.565 ******** 2026-01-02 00:44:47.436704 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:47.436713 | orchestrator | 2026-01-02 00:44:47.436722 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:47.436730 | orchestrator | Friday 02 January 2026 00:44:41 +0000 (0:00:00.211) 0:00:36.776 ******** 2026-01-02 00:44:47.436739 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-02 00:44:47.436748 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-02 00:44:47.436758 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-02 00:44:47.436767 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-02 00:44:47.436783 | orchestrator | 2026-01-02 00:44:47.436792 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:47.436801 | orchestrator | Friday 02 January 2026 00:44:42 +0000 (0:00:00.963) 0:00:37.740 ******** 2026-01-02 00:44:47.436810 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:47.436818 | orchestrator | 2026-01-02 00:44:47.436827 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:47.436836 | orchestrator | Friday 02 January 2026 00:44:42 +0000 (0:00:00.213) 0:00:37.953 ******** 2026-01-02 00:44:47.436845 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:47.436854 | orchestrator | 2026-01-02 00:44:47.436863 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:47.436871 | orchestrator | Friday 02 
January 2026 00:44:43 +0000 (0:00:00.715) 0:00:38.669 ******** 2026-01-02 00:44:47.436880 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:47.436889 | orchestrator | 2026-01-02 00:44:47.436898 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:47.436907 | orchestrator | Friday 02 January 2026 00:44:43 +0000 (0:00:00.234) 0:00:38.904 ******** 2026-01-02 00:44:47.436916 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:47.436924 | orchestrator | 2026-01-02 00:44:47.436933 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-02 00:44:47.436947 | orchestrator | Friday 02 January 2026 00:44:43 +0000 (0:00:00.230) 0:00:39.135 ******** 2026-01-02 00:44:47.436956 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:47.436965 | orchestrator | 2026-01-02 00:44:47.436974 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-02 00:44:47.436982 | orchestrator | Friday 02 January 2026 00:44:43 +0000 (0:00:00.143) 0:00:39.278 ******** 2026-01-02 00:44:47.436991 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '98c0a427-0bfe-5560-90fa-409a46d34f73'}}) 2026-01-02 00:44:47.437000 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b563cbc7-469d-5dd4-bc68-32b49ff22a36'}}) 2026-01-02 00:44:47.437009 | orchestrator | 2026-01-02 00:44:47.437018 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-02 00:44:47.437027 | orchestrator | Friday 02 January 2026 00:44:44 +0000 (0:00:00.244) 0:00:39.523 ******** 2026-01-02 00:44:47.437037 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'}) 2026-01-02 00:44:47.437077 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'}) 2026-01-02 00:44:47.437086 | orchestrator | 2026-01-02 00:44:47.437095 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-02 00:44:47.437104 | orchestrator | Friday 02 January 2026 00:44:45 +0000 (0:00:01.880) 0:00:41.404 ******** 2026-01-02 00:44:47.437113 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})  2026-01-02 00:44:47.437124 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})  2026-01-02 00:44:47.437133 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:47.437141 | orchestrator | 2026-01-02 00:44:47.437150 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-02 00:44:47.437159 | orchestrator | Friday 02 January 2026 00:44:46 +0000 (0:00:00.163) 0:00:41.567 ******** 2026-01-02 00:44:47.437168 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'}) 2026-01-02 00:44:47.437184 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'}) 2026-01-02 00:44:53.452203 | orchestrator | 2026-01-02 00:44:53.452345 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-02 00:44:53.452362 | orchestrator | Friday 02 January 2026 00:44:47 +0000 (0:00:01.361) 0:00:42.929 ******** 2026-01-02 00:44:53.452376 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 
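The "Create block VGs" and "Create block LVs" tasks above create, for each entry in `ceph_osd_devices`, a VG named `ceph-<osd_lvm_uuid>` on the device and a single block LV `osd-block-<osd_lvm_uuid>` inside it. A sketch of the equivalent CLI calls, using the device dict from this run; command construction only, nothing is executed, and the `-l 100%VG` extent allocation is an assumption about how the block LV fills its VG:

```python
# Device -> OSD UUID mapping, as shown in "Create dict of block VGs -> PVs".
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "98c0a427-0bfe-5560-90fa-409a46d34f73"},
    "sdc": {"osd_lvm_uuid": "b563cbc7-469d-5dd4-bc68-32b49ff22a36"},
}

def lvm_commands(devices: dict) -> list[str]:
    """Build the vgcreate/lvcreate command lines for each OSD device."""
    cmds = []
    for dev, meta in devices.items():
        uuid = meta["osd_lvm_uuid"]
        cmds.append(f"vgcreate ceph-{uuid} /dev/{dev}")
        # Assumption: the block LV spans the whole VG.
        cmds.append(f"lvcreate -l 100%VG -n osd-block-{uuid} ceph-{uuid}")
    return cmds
```

In the playbook itself these steps report `changed:` per item, which matches one VG plus one LV being created per device.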
'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})  2026-01-02 00:44:53.452390 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})  2026-01-02 00:44:53.452402 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:53.452416 | orchestrator | 2026-01-02 00:44:53.452428 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-02 00:44:53.452439 | orchestrator | Friday 02 January 2026 00:44:47 +0000 (0:00:00.178) 0:00:43.107 ******** 2026-01-02 00:44:53.452451 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:53.452462 | orchestrator | 2026-01-02 00:44:53.452474 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-02 00:44:53.452485 | orchestrator | Friday 02 January 2026 00:44:47 +0000 (0:00:00.159) 0:00:43.266 ******** 2026-01-02 00:44:53.452496 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})  2026-01-02 00:44:53.452507 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})  2026-01-02 00:44:53.452519 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:53.452530 | orchestrator | 2026-01-02 00:44:53.452541 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-02 00:44:53.452552 | orchestrator | Friday 02 January 2026 00:44:47 +0000 (0:00:00.154) 0:00:43.421 ******** 2026-01-02 00:44:53.452564 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:53.452575 | orchestrator | 2026-01-02 00:44:53.452586 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-02 00:44:53.452597 | orchestrator | Friday 
02 January 2026 00:44:48 +0000 (0:00:00.157) 0:00:43.579 ******** 2026-01-02 00:44:53.452608 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})  2026-01-02 00:44:53.452620 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})  2026-01-02 00:44:53.452631 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:53.452642 | orchestrator | 2026-01-02 00:44:53.452654 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-02 00:44:53.452685 | orchestrator | Friday 02 January 2026 00:44:48 +0000 (0:00:00.388) 0:00:43.968 ******** 2026-01-02 00:44:53.452696 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:53.452708 | orchestrator | 2026-01-02 00:44:53.452719 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-02 00:44:53.452730 | orchestrator | Friday 02 January 2026 00:44:48 +0000 (0:00:00.159) 0:00:44.128 ******** 2026-01-02 00:44:53.452741 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})  2026-01-02 00:44:53.452752 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})  2026-01-02 00:44:53.452763 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:53.452774 | orchestrator | 2026-01-02 00:44:53.452785 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-02 00:44:53.452797 | orchestrator | Friday 02 January 2026 00:44:48 +0000 (0:00:00.164) 0:00:44.292 ******** 2026-01-02 00:44:53.452808 | orchestrator | ok: [testbed-node-4] 
2026-01-02 00:44:53.452854 | orchestrator |
2026-01-02 00:44:53.452867 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-02 00:44:53.452878 | orchestrator | Friday 02 January 2026 00:44:48 +0000 (0:00:00.150) 0:00:44.443 ********
2026-01-02 00:44:53.452889 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})
2026-01-02 00:44:53.452901 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})
2026-01-02 00:44:53.452912 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:53.452923 | orchestrator |
2026-01-02 00:44:53.452934 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-02 00:44:53.452945 | orchestrator | Friday 02 January 2026 00:44:49 +0000 (0:00:00.154) 0:00:44.597 ********
2026-01-02 00:44:53.452956 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})
2026-01-02 00:44:53.452967 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})
2026-01-02 00:44:53.452979 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:53.452990 | orchestrator |
2026-01-02 00:44:53.453001 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-02 00:44:53.453031 | orchestrator | Friday 02 January 2026 00:44:49 +0000 (0:00:00.181) 0:00:44.779 ********
2026-01-02 00:44:53.453068 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})
2026-01-02 00:44:53.453081 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})
2026-01-02 00:44:53.453092 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:53.453103 | orchestrator |
2026-01-02 00:44:53.453114 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-02 00:44:53.453125 | orchestrator | Friday 02 January 2026 00:44:49 +0000 (0:00:00.150) 0:00:44.929 ********
2026-01-02 00:44:53.453136 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:53.453147 | orchestrator |
2026-01-02 00:44:53.453158 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-02 00:44:53.453169 | orchestrator | Friday 02 January 2026 00:44:49 +0000 (0:00:00.145) 0:00:45.075 ********
2026-01-02 00:44:53.453180 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:53.453191 | orchestrator |
2026-01-02 00:44:53.453202 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-02 00:44:53.453214 | orchestrator | Friday 02 January 2026 00:44:49 +0000 (0:00:00.136) 0:00:45.211 ********
2026-01-02 00:44:53.453224 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:53.453235 | orchestrator |
2026-01-02 00:44:53.453246 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-02 00:44:53.453257 | orchestrator | Friday 02 January 2026 00:44:49 +0000 (0:00:00.160) 0:00:45.372 ********
2026-01-02 00:44:53.453268 | orchestrator | ok: [testbed-node-4] => {
2026-01-02 00:44:53.453279 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-01-02 00:44:53.453291 | orchestrator | }
2026-01-02 00:44:53.453303 | orchestrator |
2026-01-02 00:44:53.453314 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-02 00:44:53.453325 | orchestrator | Friday 02 January 2026 00:44:50 +0000 (0:00:00.152) 0:00:45.525 ********
2026-01-02 00:44:53.453336 | orchestrator | ok: [testbed-node-4] => {
2026-01-02 00:44:53.453347 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-01-02 00:44:53.453358 | orchestrator | }
2026-01-02 00:44:53.453369 | orchestrator |
2026-01-02 00:44:53.453380 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-02 00:44:53.453391 | orchestrator | Friday 02 January 2026 00:44:50 +0000 (0:00:00.149) 0:00:45.674 ********
2026-01-02 00:44:53.453411 | orchestrator | ok: [testbed-node-4] => {
2026-01-02 00:44:53.453422 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-01-02 00:44:53.453434 | orchestrator | }
2026-01-02 00:44:53.453445 | orchestrator |
2026-01-02 00:44:53.453455 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-02 00:44:53.453466 | orchestrator | Friday 02 January 2026 00:44:50 +0000 (0:00:00.414) 0:00:46.089 ********
2026-01-02 00:44:53.453478 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:44:53.453489 | orchestrator |
2026-01-02 00:44:53.453500 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-02 00:44:53.453512 | orchestrator | Friday 02 January 2026 00:44:51 +0000 (0:00:00.581) 0:00:46.670 ********
2026-01-02 00:44:53.453523 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:44:53.453534 | orchestrator |
2026-01-02 00:44:53.453545 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-02 00:44:53.453556 | orchestrator | Friday 02 January 2026 00:44:51 +0000 (0:00:00.546) 0:00:47.216 ********
2026-01-02 00:44:53.453568 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:44:53.453579 | orchestrator |
2026-01-02 00:44:53.453590 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-02 00:44:53.453601 | orchestrator | Friday 02 January 2026 00:44:52 +0000 (0:00:00.161) 0:00:47.740 ********
2026-01-02 00:44:53.453612 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:44:53.453623 | orchestrator |
2026-01-02 00:44:53.453634 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-02 00:44:53.453645 | orchestrator | Friday 02 January 2026 00:44:52 +0000 (0:00:00.161) 0:00:47.901 ********
2026-01-02 00:44:53.453657 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:53.453668 | orchestrator |
2026-01-02 00:44:53.453687 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-02 00:44:53.453698 | orchestrator | Friday 02 January 2026 00:44:52 +0000 (0:00:00.127) 0:00:48.029 ********
2026-01-02 00:44:53.453709 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:53.453721 | orchestrator |
2026-01-02 00:44:53.453731 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-02 00:44:53.453742 | orchestrator | Friday 02 January 2026 00:44:52 +0000 (0:00:00.124) 0:00:48.154 ********
2026-01-02 00:44:53.453753 | orchestrator | ok: [testbed-node-4] => {
2026-01-02 00:44:53.453765 | orchestrator |  "vgs_report": {
2026-01-02 00:44:53.453776 | orchestrator |  "vg": []
2026-01-02 00:44:53.453788 | orchestrator |  }
2026-01-02 00:44:53.453799 | orchestrator | }
2026-01-02 00:44:53.453810 | orchestrator |
2026-01-02 00:44:53.453821 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-02 00:44:53.453832 | orchestrator | Friday 02 January 2026 00:44:52 +0000 (0:00:00.161) 0:00:48.316 ********
2026-01-02 00:44:53.453843 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:53.453854 | orchestrator |
2026-01-02 00:44:53.453865 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-02 00:44:53.453876 | orchestrator | Friday 02 January 2026 00:44:52 +0000 (0:00:00.154) 0:00:48.470 ********
2026-01-02 00:44:53.453887 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:53.453898 | orchestrator |
2026-01-02 00:44:53.453909 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-02 00:44:53.453920 | orchestrator | Friday 02 January 2026 00:44:53 +0000 (0:00:00.147) 0:00:48.618 ********
2026-01-02 00:44:53.453931 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:53.453942 | orchestrator |
2026-01-02 00:44:53.453953 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-02 00:44:53.453964 | orchestrator | Friday 02 January 2026 00:44:53 +0000 (0:00:00.160) 0:00:48.778 ********
2026-01-02 00:44:53.453976 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:53.453987 | orchestrator |
2026-01-02 00:44:53.454004 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-02 00:44:58.319901 | orchestrator | Friday 02 January 2026 00:44:53 +0000 (0:00:00.171) 0:00:48.950 ********
2026-01-02 00:44:58.320098 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:58.320120 | orchestrator |
2026-01-02 00:44:58.320133 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-02 00:44:58.320145 | orchestrator | Friday 02 January 2026 00:44:53 +0000 (0:00:00.472) 0:00:49.422 ********
2026-01-02 00:44:58.320156 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:58.320167 | orchestrator |
2026-01-02 00:44:58.320178 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-02 00:44:58.320189 | orchestrator | Friday 02 January 2026 00:44:54 +0000 (0:00:00.172) 0:00:49.595 ********
2026-01-02 00:44:58.320200 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:58.320211 | orchestrator |
2026-01-02 00:44:58.320221 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-02 00:44:58.320232 | orchestrator | Friday 02 January 2026 00:44:54 +0000 (0:00:00.152) 0:00:49.748 ********
2026-01-02 00:44:58.320243 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:58.320253 | orchestrator |
2026-01-02 00:44:58.320264 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-02 00:44:58.320275 | orchestrator | Friday 02 January 2026 00:44:54 +0000 (0:00:00.162) 0:00:49.911 ********
2026-01-02 00:44:58.320286 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:58.320297 | orchestrator |
2026-01-02 00:44:58.320307 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-02 00:44:58.320318 | orchestrator | Friday 02 January 2026 00:44:54 +0000 (0:00:00.133) 0:00:50.044 ********
2026-01-02 00:44:58.320329 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:58.320340 | orchestrator |
2026-01-02 00:44:58.320351 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-02 00:44:58.320362 | orchestrator | Friday 02 January 2026 00:44:54 +0000 (0:00:00.137) 0:00:50.181 ********
2026-01-02 00:44:58.320372 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:58.320383 | orchestrator |
2026-01-02 00:44:58.320397 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-02 00:44:58.320410 | orchestrator | Friday 02 January 2026 00:44:54 +0000 (0:00:00.127) 0:00:50.308 ********
2026-01-02 00:44:58.320423 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:58.320436 | orchestrator |
2026-01-02 00:44:58.320448 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-02 00:44:58.320461 | orchestrator | Friday 02 January 2026 00:44:54 +0000 (0:00:00.137) 0:00:50.445 ********
2026-01-02 00:44:58.320474 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:58.320487 | orchestrator |
2026-01-02 00:44:58.320500 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-02 00:44:58.320511 | orchestrator | Friday 02 January 2026 00:44:55 +0000 (0:00:00.128) 0:00:50.574 ********
2026-01-02 00:44:58.320522 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:58.320532 | orchestrator |
2026-01-02 00:44:58.320544 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-02 00:44:58.320570 | orchestrator | Friday 02 January 2026 00:44:55 +0000 (0:00:00.136) 0:00:50.711 ********
2026-01-02 00:44:58.320582 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})
2026-01-02 00:44:58.320596 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})
2026-01-02 00:44:58.320606 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:58.320617 | orchestrator |
2026-01-02 00:44:58.320628 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-02 00:44:58.320639 | orchestrator | Friday 02 January 2026 00:44:55 +0000 (0:00:00.162) 0:00:50.873 ********
2026-01-02 00:44:58.320650 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})
2026-01-02 00:44:58.320671 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})
2026-01-02 00:44:58.320682 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:58.320693 | orchestrator |
2026-01-02 00:44:58.320704 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-02 00:44:58.320714 | orchestrator | Friday 02 January 2026 00:44:55 +0000 (0:00:00.192) 0:00:51.065 ********
2026-01-02 00:44:58.320725 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})
2026-01-02 00:44:58.320736 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})
2026-01-02 00:44:58.320747 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:58.320758 | orchestrator |
2026-01-02 00:44:58.320769 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-02 00:44:58.320780 | orchestrator | Friday 02 January 2026 00:44:55 +0000 (0:00:00.352) 0:00:51.418 ********
2026-01-02 00:44:58.320790 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})
2026-01-02 00:44:58.320802 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})
2026-01-02 00:44:58.320813 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:58.320823 | orchestrator |
2026-01-02 00:44:58.320854 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-02 00:44:58.320865 | orchestrator | Friday 02 January 2026 00:44:56 +0000 (0:00:00.140) 0:00:51.558 ********
2026-01-02 00:44:58.320876 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})
2026-01-02 00:44:58.320887 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})
2026-01-02 00:44:58.320898 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:58.320909 | orchestrator |
2026-01-02 00:44:58.320920 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-02 00:44:58.320931 | orchestrator | Friday 02 January 2026 00:44:56 +0000 (0:00:00.183) 0:00:51.742 ********
2026-01-02 00:44:58.320943 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})
2026-01-02 00:44:58.320954 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})
2026-01-02 00:44:58.320965 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:58.320976 | orchestrator |
2026-01-02 00:44:58.320987 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-02 00:44:58.320998 | orchestrator | Friday 02 January 2026 00:44:56 +0000 (0:00:00.181) 0:00:51.923 ********
2026-01-02 00:44:58.321008 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})
2026-01-02 00:44:58.321019 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})
2026-01-02 00:44:58.321030 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:58.321058 | orchestrator |
2026-01-02 00:44:58.321070 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-02 00:44:58.321081 | orchestrator | Friday 02 January 2026 00:44:56 +0000 (0:00:00.158) 0:00:52.081 ********
2026-01-02 00:44:58.321099 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})
2026-01-02 00:44:58.321115 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})
2026-01-02 00:44:58.321126 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:58.321137 | orchestrator |
2026-01-02 00:44:58.321148 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-02 00:44:58.321159 | orchestrator | Friday 02 January 2026 00:44:56 +0000 (0:00:00.155) 0:00:52.236 ********
2026-01-02 00:44:58.321171 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:44:58.321182 | orchestrator |
2026-01-02 00:44:58.321192 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-02 00:44:58.321203 | orchestrator | Friday 02 January 2026 00:44:57 +0000 (0:00:00.506) 0:00:52.743 ********
2026-01-02 00:44:58.321214 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:44:58.321225 | orchestrator |
2026-01-02 00:44:58.321236 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-02 00:44:58.321247 | orchestrator | Friday 02 January 2026 00:44:57 +0000 (0:00:00.507) 0:00:53.250 ********
2026-01-02 00:44:58.321258 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:44:58.321269 | orchestrator |
2026-01-02 00:44:58.321279 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-02 00:44:58.321290 | orchestrator | Friday 02 January 2026 00:44:57 +0000 (0:00:00.146) 0:00:53.396 ********
2026-01-02 00:44:58.321301 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'vg_name': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})
2026-01-02 00:44:58.321313 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'vg_name': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})
2026-01-02 00:44:58.321324 | orchestrator |
2026-01-02 00:44:58.321335 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-02 00:44:58.321345 | orchestrator | Friday 02 January 2026 00:44:58 +0000 (0:00:00.140) 0:00:53.537 ********
2026-01-02 00:44:58.321356 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})
2026-01-02 00:44:58.321367 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})
2026-01-02 00:44:58.321378 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:44:58.321389 | orchestrator |
2026-01-02 00:44:58.321400 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-02 00:44:58.321411 | orchestrator | Friday 02 January 2026 00:44:58 +0000 (0:00:00.130) 0:00:53.668 ********
2026-01-02 00:44:58.321422 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})
2026-01-02 00:44:58.321441 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})
2026-01-02 00:45:04.741842 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:45:04.741960 | orchestrator |
2026-01-02 00:45:04.741981 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-02 00:45:04.741995 | orchestrator | Friday 02 January 2026 00:44:58 +0000 (0:00:00.151) 0:00:53.819 ********
2026-01-02 00:45:04.742007 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})
2026-01-02 00:45:04.742137 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})
2026-01-02 00:45:04.742163 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:45:04.742201 | orchestrator |
2026-01-02 00:45:04.742214 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-02 00:45:04.742225 | orchestrator | Friday 02 January 2026 00:44:58 +0000 (0:00:00.148) 0:00:53.967 ********
2026-01-02 00:45:04.742237 | orchestrator | ok: [testbed-node-4] => {
2026-01-02 00:45:04.742248 | orchestrator |  "lvm_report": {
2026-01-02 00:45:04.742261 | orchestrator |  "lv": [
2026-01-02 00:45:04.742272 | orchestrator |  {
2026-01-02 00:45:04.742283 | orchestrator |  "lv_name": "osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73",
2026-01-02 00:45:04.742295 | orchestrator |  "vg_name": "ceph-98c0a427-0bfe-5560-90fa-409a46d34f73"
2026-01-02 00:45:04.742306 | orchestrator |  },
2026-01-02 00:45:04.742322 | orchestrator |  {
2026-01-02 00:45:04.742340 | orchestrator |  "lv_name": "osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36",
2026-01-02 00:45:04.742358 | orchestrator |  "vg_name": "ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36"
2026-01-02 00:45:04.742376 | orchestrator |  }
2026-01-02 00:45:04.742396 | orchestrator |  ],
2026-01-02 00:45:04.742417 | orchestrator |  "pv": [
2026-01-02 00:45:04.742435 | orchestrator |  {
2026-01-02 00:45:04.742452 | orchestrator |  "pv_name": "/dev/sdb",
2026-01-02 00:45:04.742467 | orchestrator |  "vg_name": "ceph-98c0a427-0bfe-5560-90fa-409a46d34f73"
2026-01-02 00:45:04.742480 | orchestrator |  },
2026-01-02 00:45:04.742493 | orchestrator |  {
2026-01-02 00:45:04.742506 | orchestrator |  "pv_name": "/dev/sdc",
2026-01-02 00:45:04.742518 | orchestrator |  "vg_name": "ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36"
2026-01-02 00:45:04.742531 | orchestrator |  }
2026-01-02 00:45:04.742544 | orchestrator |  ]
2026-01-02 00:45:04.742557 | orchestrator |  }
2026-01-02 00:45:04.742571 | orchestrator | }
2026-01-02 00:45:04.742584 | orchestrator |
2026-01-02 00:45:04.742597 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-02 00:45:04.742610 | orchestrator |
2026-01-02 00:45:04.742624 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-02 00:45:04.742637 | orchestrator | Friday 02 January 2026 00:44:58 +0000 (0:00:00.399) 0:00:54.367 ********
2026-01-02 00:45:04.742651 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-02 00:45:04.742664 | orchestrator |
2026-01-02 00:45:04.742678 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-02 00:45:04.742692 | orchestrator | Friday 02 January 2026 00:44:59 +0000 (0:00:00.244) 0:00:54.611 ********
2026-01-02 00:45:04.742705 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:45:04.742719 | orchestrator |
2026-01-02 00:45:04.742731 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:45:04.742742 | orchestrator | Friday 02 January 2026 00:44:59 +0000 (0:00:00.241) 0:00:54.853 ********
2026-01-02 00:45:04.742752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-02 00:45:04.742763 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-02 00:45:04.742774 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-02 00:45:04.742785 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-02 00:45:04.742795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-02 00:45:04.742806 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-02 00:45:04.742817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-02 00:45:04.742828 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-02 00:45:04.742838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-02 00:45:04.742860 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-02 00:45:04.742871 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-02 00:45:04.742882 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-02 00:45:04.742893 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-02 00:45:04.742904 | orchestrator |
2026-01-02 00:45:04.742920 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:45:04.742931 | orchestrator | Friday 02 January 2026 00:44:59 +0000 (0:00:00.487) 0:00:55.340 ********
2026-01-02 00:45:04.742942 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:45:04.742953 | orchestrator |
2026-01-02 00:45:04.742964 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:45:04.742975 | orchestrator | Friday 02 January 2026 00:45:00 +0000 (0:00:00.237) 0:00:55.577 ********
2026-01-02 00:45:04.742986 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:45:04.742997 | orchestrator |
2026-01-02 00:45:04.743008 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:45:04.743069 | orchestrator | Friday 02 January 2026 00:45:00 +0000 (0:00:00.207) 0:00:55.785 ********
2026-01-02 00:45:04.743083 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:45:04.743095 | orchestrator |
2026-01-02 00:45:04.743106 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:45:04.743117 | orchestrator | Friday 02 January 2026 00:45:00 +0000 (0:00:00.204) 0:00:55.990 ********
2026-01-02 00:45:04.743128 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:45:04.743139 | orchestrator |
2026-01-02 00:45:04.743150 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:45:04.743222 | orchestrator | Friday 02 January 2026 00:45:00 +0000 (0:00:00.209) 0:00:56.199 ********
2026-01-02 00:45:04.743245 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:45:04.743263 | orchestrator |
2026-01-02 00:45:04.743280 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:45:04.743295 | orchestrator | Friday 02 January 2026 00:45:01 +0000 (0:00:00.722) 0:00:56.922 ********
2026-01-02 00:45:04.743313 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:45:04.743330 | orchestrator |
2026-01-02 00:45:04.743349 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:45:04.743367 | orchestrator | Friday 02 January 2026 00:45:01 +0000 (0:00:00.212) 0:00:57.135 ********
2026-01-02 00:45:04.743385 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:45:04.743402 | orchestrator |
2026-01-02 00:45:04.743422 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:45:04.743441 | orchestrator | Friday 02 January 2026 00:45:01 +0000 (0:00:00.252) 0:00:57.387 ********
2026-01-02 00:45:04.743459 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:45:04.743474 | orchestrator |
2026-01-02 00:45:04.743486 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:45:04.743497 | orchestrator | Friday 02 January 2026 00:45:02 +0000 (0:00:00.208) 0:00:57.596 ********
2026-01-02 00:45:04.743508 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef)
2026-01-02 00:45:04.743520 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef)
2026-01-02 00:45:04.743531 | orchestrator |
2026-01-02 00:45:04.743542 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:45:04.743552 | orchestrator | Friday 02 January 2026 00:45:02 +0000 (0:00:00.430) 0:00:58.026 ********
2026-01-02 00:45:04.743563 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3f193762-36b0-4c27-b28e-8efb206edc66)
2026-01-02 00:45:04.743574 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3f193762-36b0-4c27-b28e-8efb206edc66)
2026-01-02 00:45:04.743585 | orchestrator |
2026-01-02 00:45:04.743607 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:45:04.743624 | orchestrator | Friday 02 January 2026 00:45:02 +0000 (0:00:00.443) 0:00:58.470 ********
2026-01-02 00:45:04.743635 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_26cdd52f-83be-4086-bce2-9cb6df4f24ab)
2026-01-02 00:45:04.743646 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_26cdd52f-83be-4086-bce2-9cb6df4f24ab)
2026-01-02 00:45:04.743657 | orchestrator |
2026-01-02 00:45:04.743668 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:45:04.743679 | orchestrator | Friday 02 January 2026 00:45:03 +0000 (0:00:00.462) 0:00:58.932 ********
2026-01-02 00:45:04.743690 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3a47a132-03ad-4adf-a37b-d405efe1a07c)
2026-01-02 00:45:04.743771 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3a47a132-03ad-4adf-a37b-d405efe1a07c)
2026-01-02 00:45:04.743785 | orchestrator |
2026-01-02 00:45:04.743797 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:45:04.743808 | orchestrator | Friday 02 January 2026 00:45:03 +0000 (0:00:00.483) 0:00:59.416 ********
2026-01-02 00:45:04.743819 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-02 00:45:04.743830 | orchestrator |
2026-01-02 00:45:04.743841 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:45:04.743852 | orchestrator | Friday 02 January 2026 00:45:04 +0000 (0:00:00.390) 0:00:59.807 ********
2026-01-02 00:45:04.743863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-02 00:45:04.743874 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-02 00:45:04.743885 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-02 00:45:04.743896 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-02 00:45:04.743907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-02 00:45:04.743918 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-02 00:45:04.743929 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-02 00:45:04.743940 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-02 00:45:04.743951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-02 00:45:04.743962 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-02 00:45:04.743973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-02 00:45:04.743998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-02 00:45:14.073668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-02 00:45:14.073757 | orchestrator |
2026-01-02 00:45:14.073768 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:45:14.073775 | orchestrator | Friday 02 January 2026 00:45:04 +0000 (0:00:00.423) 0:01:00.230 ********
2026-01-02 00:45:14.073783 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:45:14.073791 | orchestrator |
2026-01-02 00:45:14.073798 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:45:14.073805 | orchestrator | Friday 02 January 2026 00:45:04 +0000 (0:00:00.187) 0:01:00.417 ********
2026-01-02 00:45:14.073811 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:45:14.073817 | orchestrator |
2026-01-02 00:45:14.073824 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:45:14.073830 | orchestrator | Friday 02 January 2026 00:45:05 +0000 (0:00:00.738) 0:01:01.156 ********
2026-01-02 00:45:14.073857 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:45:14.073864 | orchestrator |
2026-01-02 00:45:14.073871 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:45:14.073882 | orchestrator | Friday 02 January 2026 00:45:05 +0000 (0:00:00.213) 0:01:01.370 ********
2026-01-02 00:45:14.073892 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:45:14.073902 | orchestrator |
2026-01-02 00:45:14.073912 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:45:14.073922 | orchestrator | Friday 02 January 2026 00:45:06 +0000 (0:00:00.233) 0:01:01.603 ********
2026-01-02 00:45:14.073932 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:45:14.073942 | orchestrator |
2026-01-02 00:45:14.073952 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:45:14.073961 | orchestrator | Friday 02 January 2026 00:45:06 +0000 (0:00:00.213) 0:01:01.816 ********
2026-01-02 00:45:14.073971 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:45:14.073981 | orchestrator |
2026-01-02 00:45:14.073991 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:45:14.074001 | orchestrator | Friday 02 January 2026 00:45:06 +0000 (0:00:00.230) 0:01:02.047 ********
2026-01-02 00:45:14.074011 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:45:14.074147 | orchestrator |
2026-01-02 00:45:14.074160 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:45:14.074171 | orchestrator | Friday 02 January 2026 00:45:06 +0000 (0:00:00.216) 0:01:02.264 ********
2026-01-02 00:45:14.074181 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:45:14.074191 | orchestrator |
2026-01-02 00:45:14.074200 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:45:14.074210 | orchestrator | Friday 02 January 2026 00:45:06 +0000 (0:00:00.205) 0:01:02.469 ********
2026-01-02 00:45:14.074237 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-02 00:45:14.074250 | orchestrator |
ok: [testbed-node-5] => (item=sda14) 2026-01-02 00:45:14.074262 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-02 00:45:14.074273 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-02 00:45:14.074285 | orchestrator | 2026-01-02 00:45:14.074295 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:45:14.074307 | orchestrator | Friday 02 January 2026 00:45:07 +0000 (0:00:00.660) 0:01:03.129 ******** 2026-01-02 00:45:14.074319 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:14.074329 | orchestrator | 2026-01-02 00:45:14.074341 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:45:14.074352 | orchestrator | Friday 02 January 2026 00:45:07 +0000 (0:00:00.220) 0:01:03.350 ******** 2026-01-02 00:45:14.074363 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:14.074372 | orchestrator | 2026-01-02 00:45:14.074383 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:45:14.074393 | orchestrator | Friday 02 January 2026 00:45:08 +0000 (0:00:00.211) 0:01:03.561 ******** 2026-01-02 00:45:14.074403 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:14.074413 | orchestrator | 2026-01-02 00:45:14.074423 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:45:14.074433 | orchestrator | Friday 02 January 2026 00:45:08 +0000 (0:00:00.203) 0:01:03.764 ******** 2026-01-02 00:45:14.074443 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:14.074453 | orchestrator | 2026-01-02 00:45:14.074463 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-02 00:45:14.074473 | orchestrator | Friday 02 January 2026 00:45:08 +0000 (0:00:00.219) 0:01:03.984 ******** 2026-01-02 00:45:14.074483 | orchestrator | skipping: [testbed-node-5] 2026-01-02 
00:45:14.074493 | orchestrator | 2026-01-02 00:45:14.074503 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-02 00:45:14.074513 | orchestrator | Friday 02 January 2026 00:45:08 +0000 (0:00:00.339) 0:01:04.323 ******** 2026-01-02 00:45:14.074524 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8c17e839-2cbb-5f17-abcc-9f26ae111b42'}}) 2026-01-02 00:45:14.074548 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '37cfd703-64b2-55b0-ad28-4f6812d5fa0d'}}) 2026-01-02 00:45:14.074558 | orchestrator | 2026-01-02 00:45:14.074568 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-02 00:45:14.074580 | orchestrator | Friday 02 January 2026 00:45:09 +0000 (0:00:00.200) 0:01:04.523 ******** 2026-01-02 00:45:14.074592 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'}) 2026-01-02 00:45:14.074604 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'}) 2026-01-02 00:45:14.074614 | orchestrator | 2026-01-02 00:45:14.074625 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-02 00:45:14.074655 | orchestrator | Friday 02 January 2026 00:45:10 +0000 (0:00:01.928) 0:01:06.452 ******** 2026-01-02 00:45:14.074666 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})  2026-01-02 00:45:14.074678 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})  2026-01-02 00:45:14.074688 | orchestrator | skipping: 
[testbed-node-5] 2026-01-02 00:45:14.074698 | orchestrator | 2026-01-02 00:45:14.074708 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-02 00:45:14.074720 | orchestrator | Friday 02 January 2026 00:45:11 +0000 (0:00:00.181) 0:01:06.633 ******** 2026-01-02 00:45:14.074731 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'}) 2026-01-02 00:45:14.074741 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'}) 2026-01-02 00:45:14.074752 | orchestrator | 2026-01-02 00:45:14.074762 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-02 00:45:14.074771 | orchestrator | Friday 02 January 2026 00:45:12 +0000 (0:00:01.327) 0:01:07.961 ******** 2026-01-02 00:45:14.074782 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})  2026-01-02 00:45:14.074793 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})  2026-01-02 00:45:14.074804 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:14.074814 | orchestrator | 2026-01-02 00:45:14.074825 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-02 00:45:14.074835 | orchestrator | Friday 02 January 2026 00:45:12 +0000 (0:00:00.160) 0:01:08.121 ******** 2026-01-02 00:45:14.074846 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:14.074856 | orchestrator | 2026-01-02 00:45:14.074867 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-02 00:45:14.074876 | 
orchestrator | Friday 02 January 2026 00:45:12 +0000 (0:00:00.151) 0:01:08.273 ******** 2026-01-02 00:45:14.074896 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})  2026-01-02 00:45:14.074908 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})  2026-01-02 00:45:14.074917 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:14.074927 | orchestrator | 2026-01-02 00:45:14.074937 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-02 00:45:14.074958 | orchestrator | Friday 02 January 2026 00:45:12 +0000 (0:00:00.167) 0:01:08.441 ******** 2026-01-02 00:45:14.074969 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:14.074979 | orchestrator | 2026-01-02 00:45:14.074989 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-02 00:45:14.074999 | orchestrator | Friday 02 January 2026 00:45:13 +0000 (0:00:00.140) 0:01:08.581 ******** 2026-01-02 00:45:14.075009 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})  2026-01-02 00:45:14.075020 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})  2026-01-02 00:45:14.075030 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:14.075067 | orchestrator | 2026-01-02 00:45:14.075078 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-02 00:45:14.075088 | orchestrator | Friday 02 January 2026 00:45:13 +0000 (0:00:00.161) 0:01:08.743 ******** 2026-01-02 00:45:14.075097 | orchestrator | 
skipping: [testbed-node-5] 2026-01-02 00:45:14.075108 | orchestrator | 2026-01-02 00:45:14.075118 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-02 00:45:14.075128 | orchestrator | Friday 02 January 2026 00:45:13 +0000 (0:00:00.153) 0:01:08.896 ******** 2026-01-02 00:45:14.075138 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})  2026-01-02 00:45:14.075150 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})  2026-01-02 00:45:14.075159 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:14.075170 | orchestrator | 2026-01-02 00:45:14.075181 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-02 00:45:14.075191 | orchestrator | Friday 02 January 2026 00:45:13 +0000 (0:00:00.160) 0:01:09.056 ******** 2026-01-02 00:45:14.075202 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:45:14.075212 | orchestrator | 2026-01-02 00:45:14.075223 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-02 00:45:14.075234 | orchestrator | Friday 02 January 2026 00:45:13 +0000 (0:00:00.364) 0:01:09.421 ******** 2026-01-02 00:45:14.075258 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})  2026-01-02 00:45:19.966135 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})  2026-01-02 00:45:19.966273 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.966302 | orchestrator | 2026-01-02 00:45:19.966318 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-01-02 00:45:19.966334 | orchestrator | Friday 02 January 2026 00:45:14 +0000 (0:00:00.150) 0:01:09.572 ******** 2026-01-02 00:45:19.966350 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})  2026-01-02 00:45:19.966364 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})  2026-01-02 00:45:19.966378 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.966393 | orchestrator | 2026-01-02 00:45:19.966407 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-02 00:45:19.966421 | orchestrator | Friday 02 January 2026 00:45:14 +0000 (0:00:00.144) 0:01:09.717 ******** 2026-01-02 00:45:19.966435 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})  2026-01-02 00:45:19.966450 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})  2026-01-02 00:45:19.966495 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.966511 | orchestrator | 2026-01-02 00:45:19.966525 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-02 00:45:19.966539 | orchestrator | Friday 02 January 2026 00:45:14 +0000 (0:00:00.149) 0:01:09.866 ******** 2026-01-02 00:45:19.966553 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.966567 | orchestrator | 2026-01-02 00:45:19.966580 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-02 00:45:19.966594 | orchestrator | Friday 02 January 2026 00:45:14 +0000 
(0:00:00.183) 0:01:10.050 ******** 2026-01-02 00:45:19.966607 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.966621 | orchestrator | 2026-01-02 00:45:19.966637 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-02 00:45:19.966651 | orchestrator | Friday 02 January 2026 00:45:14 +0000 (0:00:00.129) 0:01:10.179 ******** 2026-01-02 00:45:19.966666 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.966680 | orchestrator | 2026-01-02 00:45:19.966696 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-02 00:45:19.966710 | orchestrator | Friday 02 January 2026 00:45:14 +0000 (0:00:00.108) 0:01:10.287 ******** 2026-01-02 00:45:19.966725 | orchestrator | ok: [testbed-node-5] => { 2026-01-02 00:45:19.966741 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-02 00:45:19.966754 | orchestrator | } 2026-01-02 00:45:19.966769 | orchestrator | 2026-01-02 00:45:19.966784 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-02 00:45:19.966799 | orchestrator | Friday 02 January 2026 00:45:14 +0000 (0:00:00.136) 0:01:10.423 ******** 2026-01-02 00:45:19.966814 | orchestrator | ok: [testbed-node-5] => { 2026-01-02 00:45:19.966829 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-02 00:45:19.966845 | orchestrator | } 2026-01-02 00:45:19.966859 | orchestrator | 2026-01-02 00:45:19.966874 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-02 00:45:19.966888 | orchestrator | Friday 02 January 2026 00:45:15 +0000 (0:00:00.132) 0:01:10.556 ******** 2026-01-02 00:45:19.966901 | orchestrator | ok: [testbed-node-5] => { 2026-01-02 00:45:19.966915 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-02 00:45:19.966928 | orchestrator | } 2026-01-02 00:45:19.966941 | orchestrator | 2026-01-02 00:45:19.966952 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-02 00:45:19.966965 | orchestrator | Friday 02 January 2026 00:45:15 +0000 (0:00:00.134) 0:01:10.690 ******** 2026-01-02 00:45:19.966977 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:45:19.966990 | orchestrator | 2026-01-02 00:45:19.967002 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-02 00:45:19.967015 | orchestrator | Friday 02 January 2026 00:45:15 +0000 (0:00:00.513) 0:01:11.204 ******** 2026-01-02 00:45:19.967028 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:45:19.967077 | orchestrator | 2026-01-02 00:45:19.967202 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-02 00:45:19.967217 | orchestrator | Friday 02 January 2026 00:45:16 +0000 (0:00:00.490) 0:01:11.694 ******** 2026-01-02 00:45:19.967229 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:45:19.967242 | orchestrator | 2026-01-02 00:45:19.967254 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-02 00:45:19.967268 | orchestrator | Friday 02 January 2026 00:45:16 +0000 (0:00:00.624) 0:01:12.318 ******** 2026-01-02 00:45:19.967281 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:45:19.967292 | orchestrator | 2026-01-02 00:45:19.967304 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-02 00:45:19.967316 | orchestrator | Friday 02 January 2026 00:45:16 +0000 (0:00:00.136) 0:01:12.454 ******** 2026-01-02 00:45:19.967328 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.967340 | orchestrator | 2026-01-02 00:45:19.967352 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-02 00:45:19.967382 | orchestrator | Friday 02 January 2026 00:45:17 +0000 (0:00:00.111) 0:01:12.566 ******** 2026-01-02 00:45:19.967395 | 
orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.967408 | orchestrator | 2026-01-02 00:45:19.967421 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-02 00:45:19.967455 | orchestrator | Friday 02 January 2026 00:45:17 +0000 (0:00:00.099) 0:01:12.666 ******** 2026-01-02 00:45:19.967469 | orchestrator | ok: [testbed-node-5] => { 2026-01-02 00:45:19.967482 | orchestrator |  "vgs_report": { 2026-01-02 00:45:19.967496 | orchestrator |  "vg": [] 2026-01-02 00:45:19.967536 | orchestrator |  } 2026-01-02 00:45:19.967553 | orchestrator | } 2026-01-02 00:45:19.967565 | orchestrator | 2026-01-02 00:45:19.967578 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-02 00:45:19.967592 | orchestrator | Friday 02 January 2026 00:45:17 +0000 (0:00:00.143) 0:01:12.809 ******** 2026-01-02 00:45:19.967605 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.967618 | orchestrator | 2026-01-02 00:45:19.967632 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-02 00:45:19.967646 | orchestrator | Friday 02 January 2026 00:45:17 +0000 (0:00:00.136) 0:01:12.946 ******** 2026-01-02 00:45:19.967660 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.967673 | orchestrator | 2026-01-02 00:45:19.967686 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-02 00:45:19.967700 | orchestrator | Friday 02 January 2026 00:45:17 +0000 (0:00:00.148) 0:01:13.095 ******** 2026-01-02 00:45:19.967714 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.967727 | orchestrator | 2026-01-02 00:45:19.967741 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-02 00:45:19.967754 | orchestrator | Friday 02 January 2026 00:45:17 +0000 (0:00:00.116) 0:01:13.212 ******** 2026-01-02 00:45:19.967768 | 
orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.967781 | orchestrator | 2026-01-02 00:45:19.967795 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-02 00:45:19.967807 | orchestrator | Friday 02 January 2026 00:45:17 +0000 (0:00:00.124) 0:01:13.336 ******** 2026-01-02 00:45:19.967821 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.967834 | orchestrator | 2026-01-02 00:45:19.967847 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-02 00:45:19.967861 | orchestrator | Friday 02 January 2026 00:45:17 +0000 (0:00:00.123) 0:01:13.460 ******** 2026-01-02 00:45:19.967876 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.967889 | orchestrator | 2026-01-02 00:45:19.967903 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-02 00:45:19.967916 | orchestrator | Friday 02 January 2026 00:45:18 +0000 (0:00:00.136) 0:01:13.597 ******** 2026-01-02 00:45:19.967929 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.967943 | orchestrator | 2026-01-02 00:45:19.967956 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-02 00:45:19.967970 | orchestrator | Friday 02 January 2026 00:45:18 +0000 (0:00:00.144) 0:01:13.741 ******** 2026-01-02 00:45:19.967983 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.967996 | orchestrator | 2026-01-02 00:45:19.968011 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-02 00:45:19.968025 | orchestrator | Friday 02 January 2026 00:45:18 +0000 (0:00:00.365) 0:01:14.107 ******** 2026-01-02 00:45:19.968065 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.968079 | orchestrator | 2026-01-02 00:45:19.968100 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-01-02 00:45:19.968115 | orchestrator | Friday 02 January 2026 00:45:18 +0000 (0:00:00.152) 0:01:14.259 ******** 2026-01-02 00:45:19.968128 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.968140 | orchestrator | 2026-01-02 00:45:19.968152 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-02 00:45:19.968176 | orchestrator | Friday 02 January 2026 00:45:18 +0000 (0:00:00.145) 0:01:14.405 ******** 2026-01-02 00:45:19.968190 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.968202 | orchestrator | 2026-01-02 00:45:19.968215 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-02 00:45:19.968227 | orchestrator | Friday 02 January 2026 00:45:19 +0000 (0:00:00.134) 0:01:14.540 ******** 2026-01-02 00:45:19.968240 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.968254 | orchestrator | 2026-01-02 00:45:19.968268 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-02 00:45:19.968281 | orchestrator | Friday 02 January 2026 00:45:19 +0000 (0:00:00.157) 0:01:14.698 ******** 2026-01-02 00:45:19.968297 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.968309 | orchestrator | 2026-01-02 00:45:19.968324 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-02 00:45:19.968336 | orchestrator | Friday 02 January 2026 00:45:19 +0000 (0:00:00.150) 0:01:14.848 ******** 2026-01-02 00:45:19.968348 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.968361 | orchestrator | 2026-01-02 00:45:19.968375 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-02 00:45:19.968388 | orchestrator | Friday 02 January 2026 00:45:19 +0000 (0:00:00.138) 0:01:14.987 ******** 2026-01-02 00:45:19.968403 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})  2026-01-02 00:45:19.968418 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})  2026-01-02 00:45:19.968431 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.968445 | orchestrator | 2026-01-02 00:45:19.968458 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-02 00:45:19.968471 | orchestrator | Friday 02 January 2026 00:45:19 +0000 (0:00:00.153) 0:01:15.140 ******** 2026-01-02 00:45:19.968485 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})  2026-01-02 00:45:19.968498 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})  2026-01-02 00:45:19.968512 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:19.968525 | orchestrator | 2026-01-02 00:45:19.968539 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-02 00:45:19.968553 | orchestrator | Friday 02 January 2026 00:45:19 +0000 (0:00:00.169) 0:01:15.309 ******** 2026-01-02 00:45:19.968581 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})  2026-01-02 00:45:23.216315 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})  2026-01-02 00:45:23.216412 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:23.216425 | orchestrator | 2026-01-02 00:45:23.216435 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-01-02 00:45:23.216445 | orchestrator | Friday 02 January 2026 00:45:19 +0000 (0:00:00.155) 0:01:15.465 ******** 2026-01-02 00:45:23.216453 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})  2026-01-02 00:45:23.216462 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})  2026-01-02 00:45:23.216470 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:23.216478 | orchestrator | 2026-01-02 00:45:23.216486 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-02 00:45:23.216515 | orchestrator | Friday 02 January 2026 00:45:20 +0000 (0:00:00.158) 0:01:15.623 ******** 2026-01-02 00:45:23.216524 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})  2026-01-02 00:45:23.216532 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})  2026-01-02 00:45:23.216540 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:23.216548 | orchestrator | 2026-01-02 00:45:23.216556 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-02 00:45:23.216564 | orchestrator | Friday 02 January 2026 00:45:20 +0000 (0:00:00.170) 0:01:15.793 ******** 2026-01-02 00:45:23.216572 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})  2026-01-02 00:45:23.216593 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})  2026-01-02 00:45:23.216602 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:23.216610 | orchestrator | 2026-01-02 00:45:23.216618 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-02 00:45:23.216626 | orchestrator | Friday 02 January 2026 00:45:20 +0000 (0:00:00.389) 0:01:16.183 ******** 2026-01-02 00:45:23.216634 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})  2026-01-02 00:45:23.216642 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})  2026-01-02 00:45:23.216650 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:23.216658 | orchestrator | 2026-01-02 00:45:23.216666 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-02 00:45:23.216674 | orchestrator | Friday 02 January 2026 00:45:20 +0000 (0:00:00.181) 0:01:16.364 ******** 2026-01-02 00:45:23.216682 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})  2026-01-02 00:45:23.216691 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})  2026-01-02 00:45:23.216699 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:23.216707 | orchestrator | 2026-01-02 00:45:23.216715 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-02 00:45:23.216723 | orchestrator | Friday 02 January 2026 00:45:21 +0000 (0:00:00.175) 0:01:16.540 ******** 2026-01-02 00:45:23.216731 | 
orchestrator | ok: [testbed-node-5] 2026-01-02 00:45:23.216740 | orchestrator | 2026-01-02 00:45:23.216748 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-02 00:45:23.216756 | orchestrator | Friday 02 January 2026 00:45:21 +0000 (0:00:00.589) 0:01:17.129 ******** 2026-01-02 00:45:23.216765 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:45:23.216773 | orchestrator | 2026-01-02 00:45:23.216781 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-02 00:45:23.216789 | orchestrator | Friday 02 January 2026 00:45:22 +0000 (0:00:00.555) 0:01:17.685 ******** 2026-01-02 00:45:23.216797 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:45:23.216804 | orchestrator | 2026-01-02 00:45:23.216812 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-02 00:45:23.216820 | orchestrator | Friday 02 January 2026 00:45:22 +0000 (0:00:00.149) 0:01:17.835 ******** 2026-01-02 00:45:23.216829 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'vg_name': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'}) 2026-01-02 00:45:23.216838 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'vg_name': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'}) 2026-01-02 00:45:23.216854 | orchestrator | 2026-01-02 00:45:23.216864 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-02 00:45:23.216874 | orchestrator | Friday 02 January 2026 00:45:22 +0000 (0:00:00.205) 0:01:18.040 ******** 2026-01-02 00:45:23.216897 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})  2026-01-02 00:45:23.216907 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})  2026-01-02 00:45:23.216916 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:23.216926 | orchestrator | 2026-01-02 00:45:23.216935 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-02 00:45:23.216944 | orchestrator | Friday 02 January 2026 00:45:22 +0000 (0:00:00.190) 0:01:18.231 ******** 2026-01-02 00:45:23.216953 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})  2026-01-02 00:45:23.216963 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})  2026-01-02 00:45:23.216972 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:23.216981 | orchestrator | 2026-01-02 00:45:23.216991 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-02 00:45:23.217000 | orchestrator | Friday 02 January 2026 00:45:22 +0000 (0:00:00.150) 0:01:18.381 ******** 2026-01-02 00:45:23.217008 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})  2026-01-02 00:45:23.217018 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})  2026-01-02 00:45:23.217027 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:23.217085 | orchestrator | 2026-01-02 00:45:23.217095 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-02 00:45:23.217104 | orchestrator | Friday 02 January 2026 00:45:23 +0000 (0:00:00.154) 0:01:18.536 ******** 2026-01-02 00:45:23.217114 | 
orchestrator | ok: [testbed-node-5] => { 2026-01-02 00:45:23.217124 | orchestrator |  "lvm_report": { 2026-01-02 00:45:23.217134 | orchestrator |  "lv": [ 2026-01-02 00:45:23.217143 | orchestrator |  { 2026-01-02 00:45:23.217158 | orchestrator |  "lv_name": "osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d", 2026-01-02 00:45:23.217168 | orchestrator |  "vg_name": "ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d" 2026-01-02 00:45:23.217177 | orchestrator |  }, 2026-01-02 00:45:23.217187 | orchestrator |  { 2026-01-02 00:45:23.217197 | orchestrator |  "lv_name": "osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42", 2026-01-02 00:45:23.217206 | orchestrator |  "vg_name": "ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42" 2026-01-02 00:45:23.217216 | orchestrator |  } 2026-01-02 00:45:23.217225 | orchestrator |  ], 2026-01-02 00:45:23.217236 | orchestrator |  "pv": [ 2026-01-02 00:45:23.217245 | orchestrator |  { 2026-01-02 00:45:23.217254 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-02 00:45:23.217262 | orchestrator |  "vg_name": "ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42" 2026-01-02 00:45:23.217270 | orchestrator |  }, 2026-01-02 00:45:23.217278 | orchestrator |  { 2026-01-02 00:45:23.217286 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-02 00:45:23.217294 | orchestrator |  "vg_name": "ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d" 2026-01-02 00:45:23.217302 | orchestrator |  } 2026-01-02 00:45:23.217311 | orchestrator |  ] 2026-01-02 00:45:23.217325 | orchestrator |  } 2026-01-02 00:45:23.217334 | orchestrator | } 2026-01-02 00:45:23.217342 | orchestrator | 2026-01-02 00:45:23.217351 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:45:23.217359 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-02 00:45:23.217367 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-02 00:45:23.217375 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-02 00:45:23.217384 | orchestrator | 2026-01-02 00:45:23.217392 | orchestrator | 2026-01-02 00:45:23.217400 | orchestrator | 2026-01-02 00:45:23.217408 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:45:23.217416 | orchestrator | Friday 02 January 2026 00:45:23 +0000 (0:00:00.162) 0:01:18.699 ******** 2026-01-02 00:45:23.217424 | orchestrator | =============================================================================== 2026-01-02 00:45:23.217433 | orchestrator | Create block VGs -------------------------------------------------------- 5.92s 2026-01-02 00:45:23.217441 | orchestrator | Create block LVs -------------------------------------------------------- 4.20s 2026-01-02 00:45:23.217449 | orchestrator | Add known partitions to the list of available block devices ------------- 1.92s 2026-01-02 00:45:23.217457 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.82s 2026-01-02 00:45:23.217465 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.73s 2026-01-02 00:45:23.217473 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.65s 2026-01-02 00:45:23.217481 | orchestrator | Add known links to the list of available block devices ------------------ 1.60s 2026-01-02 00:45:23.217490 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.59s 2026-01-02 00:45:23.217503 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.58s 2026-01-02 00:45:23.701183 | orchestrator | Add known partitions to the list of available block devices ------------- 1.50s 2026-01-02 00:45:23.701285 | orchestrator | Add known links to the list of available block devices ------------------ 1.13s 2026-01-02 00:45:23.701296 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.96s 2026-01-02 00:45:23.701304 | orchestrator | Print LVM report data --------------------------------------------------- 0.93s 2026-01-02 00:45:23.701310 | orchestrator | Add known links to the list of available block devices ------------------ 0.92s 2026-01-02 00:45:23.701317 | orchestrator | Add known links to the list of available block devices ------------------ 0.88s 2026-01-02 00:45:23.701325 | orchestrator | Calculate size needed for LVs on ceph_wal_devices ----------------------- 0.78s 2026-01-02 00:45:23.701331 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s 2026-01-02 00:45:23.701338 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s 2026-01-02 00:45:23.701345 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.74s 2026-01-02 00:45:23.701352 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2026-01-02 00:45:36.263570 | orchestrator | 2026-01-02 00:45:36 | INFO  | Task d1f41c4a-08ce-4941-a18a-c77043563aab (facts) was prepared for execution. 2026-01-02 00:45:36.263693 | orchestrator | 2026-01-02 00:45:36 | INFO  | It takes a moment until task d1f41c4a-08ce-4941-a18a-c77043563aab (facts) has been started and output is visible here. 
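The LVM play above ends by merging the `lvs`/`pvs` JSON reports ("Combine JSON from _lvs_cmd_output/_pvs_cmd_output"), deriving VG/LV names, and printing the combined `lvm_report`. A minimal sketch of that combination step, assuming report fragments in the shape `lvs/pvs --reportformat json` produces (the helper names are hypothetical; the values are copied from the "Print LVM report data" output above):

```python
import json

# Report fragments in the shape `lvs/pvs --reportformat json -o ...` emits;
# the lv/pv entries below are taken verbatim from the log output above.
_lvs_cmd_output = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d",
     "vg_name": "ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d"},
    {"lv_name": "osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42",
     "vg_name": "ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42"},
]}]})
_pvs_cmd_output = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb", "vg_name": "ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42"},
    {"pv_name": "/dev/sdc", "vg_name": "ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d"},
]}]})

def build_lvm_report(lvs_json: str, pvs_json: str) -> dict:
    """Merge the lvs and pvs JSON reports into one lvm_report dict."""
    lvs = json.loads(lvs_json)["report"][0]["lv"]
    pvs = json.loads(pvs_json)["report"][0]["pv"]
    return {"lv": lvs, "pv": pvs}

def vg_lv_names(report: dict) -> list:
    """Build 'vg/lv' names, usable to check that each lvm_volumes entry exists."""
    return [f"{e['vg_name']}/{e['lv_name']}" for e in report["lv"]]

lvm_report = build_lvm_report(_lvs_cmd_output, _pvs_cmd_output)
print(json.dumps(lvm_report, indent=2))
print(vg_lv_names(lvm_report))
```

The "Fail if … LV defined in lvm_volumes is missing" tasks then presumably compare each configured `data_vg`/`data` pair against such a name list; here all configured LVs exist, so those tasks are skipped.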
2026-01-02 00:45:48.975697 | orchestrator | 2026-01-02 00:45:48.975800 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-02 00:45:48.975813 | orchestrator | 2026-01-02 00:45:48.975823 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-02 00:45:48.975833 | orchestrator | Friday 02 January 2026 00:45:40 +0000 (0:00:00.290) 0:00:00.290 ******** 2026-01-02 00:45:48.975866 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:45:48.975878 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:45:48.975887 | orchestrator | ok: [testbed-manager] 2026-01-02 00:45:48.975896 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:45:48.975905 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:45:48.975914 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:45:48.975922 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:45:48.975931 | orchestrator | 2026-01-02 00:45:48.975940 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-02 00:45:48.975950 | orchestrator | Friday 02 January 2026 00:45:41 +0000 (0:00:01.122) 0:00:01.412 ******** 2026-01-02 00:45:48.975959 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:45:48.975969 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:45:48.975977 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:45:48.975986 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:45:48.975995 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:45:48.976004 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:45:48.976012 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:48.976065 | orchestrator | 2026-01-02 00:45:48.976075 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-02 00:45:48.976084 | orchestrator | 2026-01-02 00:45:48.976093 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-02 00:45:48.976102 | orchestrator | Friday 02 January 2026 00:45:43 +0000 (0:00:01.274) 0:00:02.687 ******** 2026-01-02 00:45:48.976111 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:45:48.976120 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:45:48.976128 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:45:48.976137 | orchestrator | ok: [testbed-manager] 2026-01-02 00:45:48.976146 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:45:48.976154 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:45:48.976163 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:45:48.976172 | orchestrator | 2026-01-02 00:45:48.976180 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-02 00:45:48.976189 | orchestrator | 2026-01-02 00:45:48.976198 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-02 00:45:48.976207 | orchestrator | Friday 02 January 2026 00:45:47 +0000 (0:00:04.832) 0:00:07.520 ******** 2026-01-02 00:45:48.976216 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:45:48.976224 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:45:48.976233 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:45:48.976242 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:45:48.976252 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:45:48.976262 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:45:48.976272 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:48.976282 | orchestrator | 2026-01-02 00:45:48.976292 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:45:48.976303 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:45:48.976314 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-02 00:45:48.976325 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:45:48.976336 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:45:48.976346 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:45:48.976356 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:45:48.976372 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:45:48.976382 | orchestrator | 2026-01-02 00:45:48.976393 | orchestrator | 2026-01-02 00:45:48.976403 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:45:48.976414 | orchestrator | Friday 02 January 2026 00:45:48 +0000 (0:00:00.558) 0:00:08.078 ******** 2026-01-02 00:45:48.976429 | orchestrator | =============================================================================== 2026-01-02 00:45:48.976443 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.83s 2026-01-02 00:45:48.976453 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.27s 2026-01-02 00:45:48.976463 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.12s 2026-01-02 00:45:48.976474 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2026-01-02 00:46:01.582839 | orchestrator | 2026-01-02 00:46:01 | INFO  | Task 8b925761-c879-4d1f-82b7-468cb644a7bf (frr) was prepared for execution. 2026-01-02 00:46:01.582925 | orchestrator | 2026-01-02 00:46:01 | INFO  | It takes a moment until task 8b925761-c879-4d1f-82b7-468cb644a7bf (frr) has been started and output is visible here. 
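The INFO lines above come from the OSISM manager's task wrapper, which prepares a task, then polls its state and relays output once it has started. A minimal, hypothetical sketch of such a poll loop (the function names and the state-lookup callable are assumptions; the STARTED state, the 1-second interval, and the "Wait 1 second(s) until the next check" wording match messages seen later in this log):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, sleep=time.sleep):
    """Poll every task until none is in state STARTED.

    get_state is a hypothetical callable mapping a task ID to its
    current state string (e.g. STARTED, SUCCESS)."""
    finished = {}
    while True:
        for task_id in [t for t in task_ids if t not in finished]:
            state = get_state(task_id)
            if state == "STARTED":
                print(f"Task {task_id} is in state STARTED")
            else:
                finished[task_id] = state
        if len(finished) == len(task_ids):
            return finished
        print(f"Wait {int(interval)} second(s) until the next check")
        sleep(interval)

# Simulated run: the first check finds both tasks STARTED,
# the second check finds both finished.
states = iter(["STARTED", "STARTED", "SUCCESS", "SUCCESS"])
result = wait_for_tasks(["task-a", "task-b"], lambda _tid: next(states),
                        sleep=lambda _s: None)
```

This mirrors the repeated "is in state STARTED" / "Wait 1 second(s)" blocks the console prints while the nutshell tasks run in the background.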
2026-01-02 00:46:30.708614 | orchestrator | 2026-01-02 00:46:30.708731 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-01-02 00:46:30.708749 | orchestrator | 2026-01-02 00:46:30.708762 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-01-02 00:46:30.708792 | orchestrator | Friday 02 January 2026 00:46:06 +0000 (0:00:00.234) 0:00:00.234 ******** 2026-01-02 00:46:30.708805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-01-02 00:46:30.708817 | orchestrator | 2026-01-02 00:46:30.708829 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-01-02 00:46:30.708840 | orchestrator | Friday 02 January 2026 00:46:06 +0000 (0:00:00.223) 0:00:00.457 ******** 2026-01-02 00:46:30.708851 | orchestrator | changed: [testbed-manager] 2026-01-02 00:46:30.708865 | orchestrator | 2026-01-02 00:46:30.708876 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-01-02 00:46:30.708894 | orchestrator | Friday 02 January 2026 00:46:07 +0000 (0:00:01.243) 0:00:01.701 ******** 2026-01-02 00:46:30.708906 | orchestrator | changed: [testbed-manager] 2026-01-02 00:46:30.708917 | orchestrator | 2026-01-02 00:46:30.708928 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-01-02 00:46:30.708939 | orchestrator | Friday 02 January 2026 00:46:18 +0000 (0:00:10.610) 0:00:12.311 ******** 2026-01-02 00:46:30.708949 | orchestrator | ok: [testbed-manager] 2026-01-02 00:46:30.708961 | orchestrator | 2026-01-02 00:46:30.708972 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-01-02 00:46:30.708984 | orchestrator | Friday 02 January 2026 00:46:20 +0000 (0:00:02.070) 0:00:14.381 ******** 2026-01-02 
00:46:30.708995 | orchestrator | changed: [testbed-manager] 2026-01-02 00:46:30.709032 | orchestrator | 2026-01-02 00:46:30.709044 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-01-02 00:46:30.709055 | orchestrator | Friday 02 January 2026 00:46:21 +0000 (0:00:01.056) 0:00:15.438 ******** 2026-01-02 00:46:30.709066 | orchestrator | ok: [testbed-manager] 2026-01-02 00:46:30.709077 | orchestrator | 2026-01-02 00:46:30.709088 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-01-02 00:46:30.709100 | orchestrator | Friday 02 January 2026 00:46:22 +0000 (0:00:01.242) 0:00:16.680 ******** 2026-01-02 00:46:30.709111 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:46:30.709122 | orchestrator | 2026-01-02 00:46:30.709133 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-01-02 00:46:30.709146 | orchestrator | Friday 02 January 2026 00:46:22 +0000 (0:00:00.135) 0:00:16.816 ******** 2026-01-02 00:46:30.709181 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:46:30.709195 | orchestrator | 2026-01-02 00:46:30.709208 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-01-02 00:46:30.709220 | orchestrator | Friday 02 January 2026 00:46:22 +0000 (0:00:00.149) 0:00:16.965 ******** 2026-01-02 00:46:30.709233 | orchestrator | changed: [testbed-manager] 2026-01-02 00:46:30.709245 | orchestrator | 2026-01-02 00:46:30.709258 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-01-02 00:46:30.709271 | orchestrator | Friday 02 January 2026 00:46:23 +0000 (0:00:01.011) 0:00:17.977 ******** 2026-01-02 00:46:30.709284 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-01-02 00:46:30.709297 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-01-02 00:46:30.709311 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-01-02 00:46:30.709324 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-01-02 00:46:30.709337 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-01-02 00:46:30.709350 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-01-02 00:46:30.709363 | orchestrator | 2026-01-02 00:46:30.709375 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-01-02 00:46:30.709388 | orchestrator | Friday 02 January 2026 00:46:27 +0000 (0:00:03.428) 0:00:21.406 ******** 2026-01-02 00:46:30.709401 | orchestrator | ok: [testbed-manager] 2026-01-02 00:46:30.709413 | orchestrator | 2026-01-02 00:46:30.709426 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-01-02 00:46:30.709439 | orchestrator | Friday 02 January 2026 00:46:28 +0000 (0:00:01.641) 0:00:23.047 ******** 2026-01-02 00:46:30.709451 | orchestrator | changed: [testbed-manager] 2026-01-02 00:46:30.709464 | orchestrator | 2026-01-02 00:46:30.709477 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:46:30.709491 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:46:30.709504 | orchestrator | 2026-01-02 00:46:30.709515 | orchestrator | 2026-01-02 00:46:30.709526 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:46:30.709536 | orchestrator | Friday 02 January 2026 00:46:30 +0000 (0:00:01.477) 0:00:24.524 ******** 2026-01-02 00:46:30.709547 | 
orchestrator | =============================================================================== 2026-01-02 00:46:30.709558 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.61s 2026-01-02 00:46:30.709569 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.43s 2026-01-02 00:46:30.709580 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 2.07s 2026-01-02 00:46:30.709591 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.64s 2026-01-02 00:46:30.709602 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.48s 2026-01-02 00:46:30.709630 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.24s 2026-01-02 00:46:30.709642 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.24s 2026-01-02 00:46:30.709653 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.06s 2026-01-02 00:46:30.709664 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.01s 2026-01-02 00:46:30.709675 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2026-01-02 00:46:30.709686 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-01-02 00:46:30.709697 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-01-02 00:46:31.051155 | orchestrator | 2026-01-02 00:46:31.055136 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Jan 2 00:46:31 UTC 2026 2026-01-02 00:46:31.055198 | orchestrator | 2026-01-02 00:46:33.106590 | orchestrator | 2026-01-02 00:46:33 | INFO  | Collection nutshell is prepared for execution 2026-01-02 00:46:33.106713 | orchestrator | 2026-01-02 00:46:33 | INFO  | A [0] - 
dotfiles 2026-01-02 00:46:43.138554 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [0] - homer 2026-01-02 00:46:43.138894 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [0] - netdata 2026-01-02 00:46:43.139244 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [0] - openstackclient 2026-01-02 00:46:43.139277 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [0] - phpmyadmin 2026-01-02 00:46:43.139519 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [0] - common 2026-01-02 00:46:43.143651 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [1] -- loadbalancer 2026-01-02 00:46:43.143710 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [2] --- opensearch 2026-01-02 00:46:43.144105 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [2] --- mariadb-ng 2026-01-02 00:46:43.145195 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [3] ---- horizon 2026-01-02 00:46:43.145371 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [3] ---- keystone 2026-01-02 00:46:43.145386 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [4] ----- neutron 2026-01-02 00:46:43.145405 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [5] ------ wait-for-nova 2026-01-02 00:46:43.145418 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [6] ------- octavia 2026-01-02 00:46:43.147344 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [4] ----- barbican 2026-01-02 00:46:43.147382 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [4] ----- designate 2026-01-02 00:46:43.147599 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [4] ----- ironic 2026-01-02 00:46:43.147778 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [4] ----- placement 2026-01-02 00:46:43.148525 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [4] ----- magnum 2026-01-02 00:46:43.149367 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [1] -- openvswitch 2026-01-02 00:46:43.149649 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [2] --- ovn 2026-01-02 00:46:43.150393 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [1] -- memcached 2026-01-02 
00:46:43.150599 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [1] -- redis 2026-01-02 00:46:43.150902 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [1] -- rabbitmq-ng 2026-01-02 00:46:43.151617 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [0] - kubernetes 2026-01-02 00:46:43.155553 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [1] -- kubeconfig 2026-01-02 00:46:43.155629 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [1] -- copy-kubeconfig 2026-01-02 00:46:43.155644 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [0] - ceph 2026-01-02 00:46:43.158577 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [1] -- ceph-pools 2026-01-02 00:46:43.159209 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [2] --- copy-ceph-keys 2026-01-02 00:46:43.159230 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [3] ---- cephclient 2026-01-02 00:46:43.159240 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-01-02 00:46:43.159251 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [4] ----- wait-for-keystone 2026-01-02 00:46:43.159262 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [5] ------ kolla-ceph-rgw 2026-01-02 00:46:43.159453 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [5] ------ glance 2026-01-02 00:46:43.159652 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [5] ------ cinder 2026-01-02 00:46:43.159983 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [5] ------ nova 2026-01-02 00:46:43.160351 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [4] ----- prometheus 2026-01-02 00:46:43.160445 | orchestrator | 2026-01-02 00:46:43 | INFO  | A [5] ------ grafana 2026-01-02 00:46:43.405511 | orchestrator | 2026-01-02 00:46:43 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-01-02 00:46:43.405611 | orchestrator | 2026-01-02 00:46:43 | INFO  | Tasks are running in the background 2026-01-02 00:46:46.834332 | orchestrator | 2026-01-02 00:46:46 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-01-02 00:46:48.979433 | orchestrator | 2026-01-02 00:46:48 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:46:48.979751 | orchestrator | 2026-01-02 00:46:48 | INFO  | Task dd4d3c35-9879-46df-b3fd-76b1c09e4437 is in state STARTED 2026-01-02 00:46:48.981780 | orchestrator | 2026-01-02 00:46:48 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED 2026-01-02 00:46:48.982565 | orchestrator | 2026-01-02 00:46:48 | INFO  | Task 8382080a-02d2-4dc0-a665-7e3b4381da66 is in state STARTED 2026-01-02 00:46:48.983164 | orchestrator | 2026-01-02 00:46:48 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:46:48.985240 | orchestrator | 2026-01-02 00:46:48 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED 2026-01-02 00:46:48.991506 | orchestrator | 2026-01-02 00:46:48 | INFO  | Task 04a32383-c016-47b5-bde6-babd476afd5c is in state STARTED 2026-01-02 00:46:48.991530 | orchestrator | 2026-01-02 00:46:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:52.068705 | orchestrator | 2026-01-02 00:46:52 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:46:52.069784 | orchestrator | 2026-01-02 00:46:52 | INFO  | Task dd4d3c35-9879-46df-b3fd-76b1c09e4437 is in state STARTED 2026-01-02 00:46:52.069853 | orchestrator | 2026-01-02 00:46:52 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED 2026-01-02 00:46:52.069879 | orchestrator | 2026-01-02 00:46:52 | INFO  | Task 8382080a-02d2-4dc0-a665-7e3b4381da66 is in state STARTED 2026-01-02 00:46:52.072527 | orchestrator | 2026-01-02 00:46:52 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:46:52.072684 | orchestrator | 2026-01-02 00:46:52 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED 2026-01-02 00:46:52.073186 | orchestrator | 2026-01-02 00:46:52 | INFO  | Task 
04a32383-c016-47b5-bde6-babd476afd5c is in state STARTED 2026-01-02 00:46:52.073212 | orchestrator | 2026-01-02 00:46:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:55.095270 | orchestrator | 2026-01-02 00:46:55 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:46:55.095347 | orchestrator | 2026-01-02 00:46:55 | INFO  | Task dd4d3c35-9879-46df-b3fd-76b1c09e4437 is in state STARTED 2026-01-02 00:46:55.095774 | orchestrator | 2026-01-02 00:46:55 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED 2026-01-02 00:46:55.097228 | orchestrator | 2026-01-02 00:46:55 | INFO  | Task 8382080a-02d2-4dc0-a665-7e3b4381da66 is in state STARTED 2026-01-02 00:46:55.097732 | orchestrator | 2026-01-02 00:46:55 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:46:55.098297 | orchestrator | 2026-01-02 00:46:55 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED 2026-01-02 00:46:55.099916 | orchestrator | 2026-01-02 00:46:55 | INFO  | Task 04a32383-c016-47b5-bde6-babd476afd5c is in state STARTED 2026-01-02 00:46:55.099965 | orchestrator | 2026-01-02 00:46:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:58.192524 | orchestrator | 2026-01-02 00:46:58 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:46:58.192631 | orchestrator | 2026-01-02 00:46:58 | INFO  | Task dd4d3c35-9879-46df-b3fd-76b1c09e4437 is in state STARTED 2026-01-02 00:46:58.192647 | orchestrator | 2026-01-02 00:46:58 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED 2026-01-02 00:46:58.192660 | orchestrator | 2026-01-02 00:46:58 | INFO  | Task 8382080a-02d2-4dc0-a665-7e3b4381da66 is in state STARTED 2026-01-02 00:46:58.192671 | orchestrator | 2026-01-02 00:46:58 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:46:58.192682 | orchestrator | 2026-01-02 00:46:58 | INFO  | Task 
0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED 2026-01-02 00:46:58.192694 | orchestrator | 2026-01-02 00:46:58 | INFO  | Task 04a32383-c016-47b5-bde6-babd476afd5c is in state STARTED 2026-01-02 00:46:58.192705 | orchestrator | 2026-01-02 00:46:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:47:01.215858 | orchestrator | 2026-01-02 00:47:01 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:47:01.219114 | orchestrator | 2026-01-02 00:47:01 | INFO  | Task dd4d3c35-9879-46df-b3fd-76b1c09e4437 is in state STARTED 2026-01-02 00:47:01.219497 | orchestrator | 2026-01-02 00:47:01 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED 2026-01-02 00:47:01.221833 | orchestrator | 2026-01-02 00:47:01 | INFO  | Task 8382080a-02d2-4dc0-a665-7e3b4381da66 is in state STARTED 2026-01-02 00:47:01.222515 | orchestrator | 2026-01-02 00:47:01 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:47:01.223673 | orchestrator | 2026-01-02 00:47:01 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED 2026-01-02 00:47:01.224500 | orchestrator | 2026-01-02 00:47:01 | INFO  | Task 04a32383-c016-47b5-bde6-babd476afd5c is in state STARTED 2026-01-02 00:47:01.224532 | orchestrator | 2026-01-02 00:47:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:47:04.333907 | orchestrator | 2026-01-02 00:47:04 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:47:04.334124 | orchestrator | 2026-01-02 00:47:04 | INFO  | Task dd4d3c35-9879-46df-b3fd-76b1c09e4437 is in state STARTED 2026-01-02 00:47:04.334146 | orchestrator | 2026-01-02 00:47:04 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED 2026-01-02 00:47:04.334158 | orchestrator | 2026-01-02 00:47:04 | INFO  | Task 8382080a-02d2-4dc0-a665-7e3b4381da66 is in state STARTED 2026-01-02 00:47:04.334169 | orchestrator | 2026-01-02 00:47:04 | INFO  | Task 
3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:47:04.334180 | orchestrator | 2026-01-02 00:47:04 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED 2026-01-02 00:47:04.334191 | orchestrator | 2026-01-02 00:47:04 | INFO  | Task 04a32383-c016-47b5-bde6-babd476afd5c is in state STARTED 2026-01-02 00:47:04.334204 | orchestrator | 2026-01-02 00:47:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:47:07.405288 | orchestrator | 2026-01-02 00:47:07 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:47:07.405401 | orchestrator | 2026-01-02 00:47:07 | INFO  | Task dd4d3c35-9879-46df-b3fd-76b1c09e4437 is in state STARTED 2026-01-02 00:47:07.405418 | orchestrator | 2026-01-02 00:47:07 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED 2026-01-02 00:47:07.405457 | orchestrator | 2026-01-02 00:47:07 | INFO  | Task 8382080a-02d2-4dc0-a665-7e3b4381da66 is in state STARTED 2026-01-02 00:47:07.405470 | orchestrator | 2026-01-02 00:47:07 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:47:07.407279 | orchestrator | 2026-01-02 00:47:07 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED 2026-01-02 00:47:07.477058 | orchestrator | 2026-01-02 00:47:07 | INFO  | Task 04a32383-c016-47b5-bde6-babd476afd5c is in state STARTED 2026-01-02 00:47:07.477147 | orchestrator | 2026-01-02 00:47:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:47:10.610462 | orchestrator | 2026-01-02 00:47:10 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:47:10.616052 | orchestrator | 2026-01-02 00:47:10 | INFO  | Task dd4d3c35-9879-46df-b3fd-76b1c09e4437 is in state STARTED 2026-01-02 00:47:10.808680 | orchestrator | 2026-01-02 00:47:10 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED 2026-01-02 00:47:10.812107 | orchestrator | 2026-01-02 00:47:10 | INFO  | Task 
8382080a-02d2-4dc0-a665-7e3b4381da66 is in state STARTED
2026-01-02 00:47:11.110425 | orchestrator | 2026-01-02 00:47:11 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:47:11.115837 | orchestrator | 2026-01-02 00:47:11 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:47:11.118252 | orchestrator | 2026-01-02 00:47:11 | INFO  | Task 04a32383-c016-47b5-bde6-babd476afd5c is in state STARTED
2026-01-02 00:47:11.118291 | orchestrator | 2026-01-02 00:47:11 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:47:14.266636 | orchestrator | 2026-01-02 00:47:14 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:47:14.285323 | orchestrator |
2026-01-02 00:47:14.285388 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-01-02 00:47:14.285394 | orchestrator |
2026-01-02 00:47:14.285398 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-01-02 00:47:14.285403 | orchestrator | Friday 02 January 2026 00:46:57 +0000 (0:00:00.621) 0:00:00.621 ********
2026-01-02 00:47:14.285408 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:47:14.285413 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:47:14.285417 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:47:14.285421 | orchestrator | changed: [testbed-manager]
2026-01-02 00:47:14.285425 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:47:14.285429 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:47:14.285433 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:47:14.285437 | orchestrator |
2026-01-02 00:47:14.285441 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-01-02 00:47:14.285445 | orchestrator | Friday 02 January 2026 00:47:01 +0000 (0:00:04.349) 0:00:04.971 ********
2026-01-02 00:47:14.285449 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-01-02 00:47:14.285454 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-01-02 00:47:14.285457 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-01-02 00:47:14.285461 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-01-02 00:47:14.285465 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-01-02 00:47:14.285469 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-01-02 00:47:14.285473 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-01-02 00:47:14.285477 | orchestrator |
2026-01-02 00:47:14.285480 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-01-02 00:47:14.285485 | orchestrator | Friday 02 January 2026 00:47:03 +0000 (0:00:01.852) 0:00:06.824 ********
2026-01-02 00:47:14.285506 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-02 00:47:02.205079', 'end': '2026-01-02 00:47:02.213392', 'delta': '0:00:00.008313', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-02 00:47:14.285516 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-02 00:47:02.237189', 'end': '2026-01-02 00:47:02.247872', 'delta': '0:00:00.010683', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-02 00:47:14.285520 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-02 00:47:02.218290', 'end': '2026-01-02 00:47:02.223717', 'delta': '0:00:00.005427', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-02 00:47:14.285706 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-02 00:47:02.201029', 'end': '2026-01-02 00:47:02.209274', 'delta': '0:00:00.008245', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-02 00:47:14.285711 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-02 00:47:02.368948', 'end': '2026-01-02 00:47:02.377868', 'delta': '0:00:00.008920', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-02 00:47:14.285723 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-02 00:47:02.757026', 'end': '2026-01-02 00:47:02.763288', 'delta': '0:00:00.006262', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-02 00:47:14.285730 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-02 00:47:02.804552', 'end': '2026-01-02 00:47:02.810288', 'delta': '0:00:00.005736', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-02 00:47:14.285734 | orchestrator |
2026-01-02 00:47:14.285738 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-01-02 00:47:14.285742 | orchestrator | Friday 02 January 2026 00:47:06 +0000 (0:00:02.876) 0:00:09.700 ********
2026-01-02 00:47:14.285745 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-01-02 00:47:14.285750 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-01-02 00:47:14.285753 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-01-02 00:47:14.285757 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-01-02 00:47:14.285761 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-01-02 00:47:14.285765 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-01-02 00:47:14.285768 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-01-02 00:47:14.285772 | orchestrator |
2026-01-02 00:47:14.285776 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-01-02 00:47:14.285780 | orchestrator | Friday 02 January 2026 00:47:08 +0000 (0:00:02.221) 0:00:11.921 ********
2026-01-02 00:47:14.285784 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-01-02 00:47:14.285788 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-01-02 00:47:14.285791 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-01-02 00:47:14.285795 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-01-02 00:47:14.285799 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-01-02 00:47:14.285803 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-01-02 00:47:14.285807 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-01-02 00:47:14.285811 | orchestrator |
2026-01-02 00:47:14.285814 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:47:14.285822 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:47:14.285827 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:47:14.285831 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:47:14.285839 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:47:14.285843 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:47:14.285847 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:47:14.285851 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:47:14.285855 | orchestrator |
2026-01-02 00:47:14.285858 | orchestrator |
2026-01-02 00:47:14.285862 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:47:14.285866 | orchestrator | Friday 02 January 2026 00:47:10 +0000 (0:00:02.341) 0:00:14.263 ********
2026-01-02 00:47:14.285870 | orchestrator | ===============================================================================
2026-01-02 00:47:14.285874 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.35s
2026-01-02 00:47:14.285878 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.88s
2026-01-02 00:47:14.285882 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.34s
2026-01-02 00:47:14.285887 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.22s
2026-01-02 00:47:14.285891 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.85s
2026-01-02 00:47:14.285896 | orchestrator | 2026-01-02 00:47:14 | INFO  | Task dd4d3c35-9879-46df-b3fd-76b1c09e4437 is in state STARTED
2026-01-02 00:47:14.285901 | orchestrator | 2026-01-02 00:47:14 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:47:14.285905 | orchestrator | 2026-01-02 00:47:14 | INFO  | Task 8382080a-02d2-4dc0-a665-7e3b4381da66 is in state SUCCESS
2026-01-02 00:47:14.285909 | orchestrator | 2026-01-02 00:47:14 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:47:14.285914 | orchestrator | 2026-01-02 00:47:14 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:47:14.285918 | orchestrator | 2026-01-02 00:47:14 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:47:14.285925 | orchestrator | 2026-01-02 00:47:14 | INFO  | Task 04a32383-c016-47b5-bde6-babd476afd5c is in state STARTED
2026-01-02 00:47:14.285930 | orchestrator | 2026-01-02 00:47:14 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:47:17.401487 | orchestrator | 2026-01-02 00:47:17 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:47:17.402079 | orchestrator | 2026-01-02 00:47:17 | INFO  | Task dd4d3c35-9879-46df-b3fd-76b1c09e4437 is in state STARTED
2026-01-02 00:47:17.403817 | orchestrator | 2026-01-02 00:47:17 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:47:17.407564 | orchestrator | 2026-01-02 00:47:17 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:47:17.410617 | orchestrator | 2026-01-02 00:47:17 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:47:17.412956 | orchestrator | 2026-01-02 00:47:17 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:47:17.415003 | orchestrator | 2026-01-02 00:47:17 | INFO  | Task 
04a32383-c016-47b5-bde6-babd476afd5c is in state STARTED
2026-01-02 00:47:17.415494 | orchestrator | 2026-01-02 00:47:17 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:47:20.456785 | orchestrator | 2026-01-02 00:47:20 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:47:20.456884 | orchestrator | 2026-01-02 00:47:20 | INFO  | Task dd4d3c35-9879-46df-b3fd-76b1c09e4437 is in state STARTED
2026-01-02 00:47:20.456899 | orchestrator | 2026-01-02 00:47:20 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:47:20.456911 | orchestrator | 2026-01-02 00:47:20 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:47:20.456922 | orchestrator | 2026-01-02 00:47:20 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:47:20.456934 | orchestrator | 2026-01-02 00:47:20 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:47:20.456945 | orchestrator | 2026-01-02 00:47:20 | INFO  | Task 04a32383-c016-47b5-bde6-babd476afd5c is in state STARTED
2026-01-02 00:47:20.456956 | orchestrator | 2026-01-02 00:47:20 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:47:23.860181 | orchestrator | 2026-01-02 00:47:23 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:47:23.877735 | orchestrator | 2026-01-02 00:47:23 | INFO  | Task dd4d3c35-9879-46df-b3fd-76b1c09e4437 is in state STARTED
2026-01-02 00:47:23.884894 | orchestrator | 2026-01-02 00:47:23 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:47:23.925453 | orchestrator | 2026-01-02 00:47:23 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:47:23.926175 | orchestrator | 2026-01-02 00:47:23 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:47:23.927124 | orchestrator | 2026-01-02 00:47:23 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:47:23.930491 | orchestrator | 2026-01-02 00:47:23 | INFO  | Task 04a32383-c016-47b5-bde6-babd476afd5c is in state STARTED
2026-01-02 00:47:23.930534 | orchestrator | 2026-01-02 00:47:23 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:47:27.081437 | orchestrator | 2026-01-02 00:47:26 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:47:27.081548 | orchestrator | 2026-01-02 00:47:26 | INFO  | Task dd4d3c35-9879-46df-b3fd-76b1c09e4437 is in state STARTED
2026-01-02 00:47:27.081565 | orchestrator | 2026-01-02 00:47:26 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:47:27.081577 | orchestrator | 2026-01-02 00:47:26 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:47:27.081589 | orchestrator | 2026-01-02 00:47:27 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:47:27.081600 | orchestrator | 2026-01-02 00:47:27 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:47:27.081611 | orchestrator | 2026-01-02 00:47:27 | INFO  | Task 04a32383-c016-47b5-bde6-babd476afd5c is in state STARTED
2026-01-02 00:47:27.081623 | orchestrator | 2026-01-02 00:47:27 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:47:30.060351 | orchestrator | 2026-01-02 00:47:30 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:47:30.060438 | orchestrator | 2026-01-02 00:47:30 | INFO  | Task dd4d3c35-9879-46df-b3fd-76b1c09e4437 is in state STARTED
2026-01-02 00:47:30.060464 | orchestrator | 2026-01-02 00:47:30 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:47:30.060472 | orchestrator | 2026-01-02 00:47:30 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:47:30.060504 | orchestrator | 2026-01-02 00:47:30 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:47:30.060517 | orchestrator | 2026-01-02 00:47:30 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:47:30.060528 | orchestrator | 2026-01-02 00:47:30 | INFO  | Task 04a32383-c016-47b5-bde6-babd476afd5c is in state STARTED
2026-01-02 00:47:30.060539 | orchestrator | 2026-01-02 00:47:30 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:47:33.098592 | orchestrator | 2026-01-02 00:47:33 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:47:33.098701 | orchestrator | 2026-01-02 00:47:33 | INFO  | Task dd4d3c35-9879-46df-b3fd-76b1c09e4437 is in state STARTED
2026-01-02 00:47:33.100942 | orchestrator | 2026-01-02 00:47:33 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:47:33.104863 | orchestrator | 2026-01-02 00:47:33 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:47:33.118912 | orchestrator | 2026-01-02 00:47:33 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:47:33.315516 | orchestrator | 2026-01-02 00:47:33 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:47:33.315589 | orchestrator | 2026-01-02 00:47:33 | INFO  | Task 04a32383-c016-47b5-bde6-babd476afd5c is in state STARTED
2026-01-02 00:47:33.315597 | orchestrator | 2026-01-02 00:47:33 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:47:36.198471 | orchestrator | 2026-01-02 00:47:36 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:47:36.199226 | orchestrator | 2026-01-02 00:47:36 | INFO  | Task dd4d3c35-9879-46df-b3fd-76b1c09e4437 is in state STARTED
2026-01-02 00:47:36.209944 | orchestrator | 2026-01-02 00:47:36 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:47:36.213017 | orchestrator | 2026-01-02 00:47:36 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:47:36.216420 | orchestrator | 2026-01-02 00:47:36 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:47:36.218622 | orchestrator | 2026-01-02 00:47:36 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:47:36.221411 | orchestrator | 2026-01-02 00:47:36 | INFO  | Task 04a32383-c016-47b5-bde6-babd476afd5c is in state STARTED
2026-01-02 00:47:36.221916 | orchestrator | 2026-01-02 00:47:36 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:47:39.440034 | orchestrator | 2026-01-02 00:47:39 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:47:39.440150 | orchestrator | 2026-01-02 00:47:39 | INFO  | Task dd4d3c35-9879-46df-b3fd-76b1c09e4437 is in state STARTED
2026-01-02 00:47:39.440168 | orchestrator | 2026-01-02 00:47:39 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:47:39.440180 | orchestrator | 2026-01-02 00:47:39 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:47:39.440191 | orchestrator | 2026-01-02 00:47:39 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:47:39.440202 | orchestrator | 2026-01-02 00:47:39 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:47:39.440214 | orchestrator | 2026-01-02 00:47:39 | INFO  | Task 04a32383-c016-47b5-bde6-babd476afd5c is in state STARTED
2026-01-02 00:47:39.440225 | orchestrator | 2026-01-02 00:47:39 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:47:42.351729 | orchestrator | 2026-01-02 00:47:42 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:47:42.351855 | orchestrator | 2026-01-02 00:47:42 | INFO  | Task dd4d3c35-9879-46df-b3fd-76b1c09e4437 is in state SUCCESS
2026-01-02 00:47:42.353609 | orchestrator | 2026-01-02 00:47:42 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:47:42.354315 | orchestrator | 2026-01-02 00:47:42 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:47:42.356876 | orchestrator | 2026-01-02 00:47:42 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:47:42.357429 | orchestrator | 2026-01-02 00:47:42 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:47:42.359143 | orchestrator | 2026-01-02 00:47:42 | INFO  | Task 04a32383-c016-47b5-bde6-babd476afd5c is in state STARTED
2026-01-02 00:47:42.359509 | orchestrator | 2026-01-02 00:47:42 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:47:45.500344 | orchestrator | 2026-01-02 00:47:45 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:47:45.500493 | orchestrator | 2026-01-02 00:47:45 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:47:45.500511 | orchestrator | 2026-01-02 00:47:45 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:47:45.500524 | orchestrator | 2026-01-02 00:47:45 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:47:45.500548 | orchestrator | 2026-01-02 00:47:45 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:47:45.502400 | orchestrator | 2026-01-02 00:47:45 | INFO  | Task 04a32383-c016-47b5-bde6-babd476afd5c is in state STARTED
2026-01-02 00:47:45.502435 | orchestrator | 2026-01-02 00:47:45 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:47:48.579614 | orchestrator | 2026-01-02 00:47:48 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:47:48.579808 | orchestrator | 2026-01-02 00:47:48 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:47:48.580618 | orchestrator | 2026-01-02 00:47:48 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:47:48.582759 | orchestrator | 2026-01-02 00:47:48 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:47:48.584256 | orchestrator | 2026-01-02 00:47:48 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:47:48.585341 | orchestrator | 2026-01-02 00:47:48 | INFO  | Task 04a32383-c016-47b5-bde6-babd476afd5c is in state SUCCESS
2026-01-02 00:47:48.585740 | orchestrator | 2026-01-02 00:47:48 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:47:51.643790 | orchestrator | 2026-01-02 00:47:51 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:47:51.644224 | orchestrator | 2026-01-02 00:47:51 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:47:51.648840 | orchestrator | 2026-01-02 00:47:51 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:47:51.650006 | orchestrator | 2026-01-02 00:47:51 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:47:51.651669 | orchestrator | 2026-01-02 00:47:51 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:47:51.651699 | orchestrator | 2026-01-02 00:47:51 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:47:54.710237 | orchestrator | 2026-01-02 00:47:54 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:47:54.710535 | orchestrator | 2026-01-02 00:47:54 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:47:54.711439 | orchestrator | 2026-01-02 00:47:54 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:47:54.711469 | orchestrator | 2026-01-02 00:47:54 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:47:54.712137 | orchestrator | 2026-01-02 00:47:54 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:47:54.712170 | orchestrator | 2026-01-02 00:47:54 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:47:57.760601 | orchestrator | 2026-01-02 00:47:57 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:47:57.763093 | orchestrator | 2026-01-02 00:47:57 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:47:57.765818 | orchestrator | 2026-01-02 00:47:57 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:47:57.771563 | orchestrator | 2026-01-02 00:47:57 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:47:57.773921 | orchestrator | 2026-01-02 00:47:57 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:47:57.773959 | orchestrator | 2026-01-02 00:47:57 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:48:00.822687 | orchestrator | 2026-01-02 00:48:00 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:48:00.825235 | orchestrator | 2026-01-02 00:48:00 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:48:00.825765 | orchestrator | 2026-01-02 00:48:00 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:48:00.830211 | orchestrator | 2026-01-02 00:48:00 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:48:00.833709 | orchestrator | 2026-01-02 00:48:00 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:48:00.833776 | orchestrator | 2026-01-02 00:48:00 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:48:03.888137 | orchestrator | 2026-01-02 00:48:03 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:48:03.889579 | orchestrator | 2026-01-02 00:48:03 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:48:03.890402 | orchestrator | 2026-01-02 00:48:03 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:48:03.910810 | orchestrator | 2026-01-02 00:48:03 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:48:03.912372 | orchestrator | 2026-01-02 00:48:03 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:48:03.912411 | orchestrator | 2026-01-02 00:48:03 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:48:06.962320 | orchestrator | 2026-01-02 00:48:06 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:48:06.962743 | orchestrator | 2026-01-02 00:48:06 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:48:06.964104 | orchestrator | 2026-01-02 00:48:06 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:48:06.968670 | orchestrator | 2026-01-02 00:48:06 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:48:06.978778 | orchestrator | 2026-01-02 00:48:06 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:48:06.979084 | orchestrator | 2026-01-02 00:48:06 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:48:10.064291 | orchestrator | 2026-01-02 00:48:10 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:48:10.064407 | orchestrator | 2026-01-02 00:48:10 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:48:10.064423 | orchestrator | 2026-01-02 00:48:10 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:48:10.064436 | orchestrator | 2026-01-02 00:48:10 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:48:10.064447 | orchestrator | 2026-01-02 00:48:10 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:48:10.064459 | orchestrator | 2026-01-02 00:48:10 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:48:13.105732 | orchestrator | 2026-01-02 00:48:13 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:48:13.105838 | orchestrator | 2026-01-02 00:48:13 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:48:13.105855 | orchestrator | 2026-01-02 00:48:13 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:48:13.105867 | orchestrator | 2026-01-02 00:48:13 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:48:13.106283 | orchestrator | 2026-01-02 00:48:13 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:48:13.106312 | orchestrator | 2026-01-02 00:48:13 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:48:16.132517 | orchestrator | 2026-01-02 00:48:16 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:48:16.135924 | orchestrator | 2026-01-02 00:48:16 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED
2026-01-02 00:48:16.136064 | orchestrator | 2026-01-02 00:48:16 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:48:16.137661 | orchestrator | 2026-01-02 00:48:16 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:48:16.139196 | orchestrator | 2026-01-02 00:48:16 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:48:16.139275 | orchestrator | 2026-01-02 00:48:16 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:48:19.167939 | orchestrator | 2026-01-02 00:48:19 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:48:19.168912 | orchestrator | 2026-01-02 00:48:19 | INFO  | Task 
8bdb245c-351e-47fc-abc4-71c51e647c48 is in state STARTED 2026-01-02 00:48:19.169837 | orchestrator | 2026-01-02 00:48:19 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:48:19.170356 | orchestrator | 2026-01-02 00:48:19 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED 2026-01-02 00:48:19.171106 | orchestrator | 2026-01-02 00:48:19 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED 2026-01-02 00:48:19.171256 | orchestrator | 2026-01-02 00:48:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:48:22.207373 | orchestrator | 2026-01-02 00:48:22 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:48:22.207884 | orchestrator | 2026-01-02 00:48:22 | INFO  | Task 8bdb245c-351e-47fc-abc4-71c51e647c48 is in state SUCCESS 2026-01-02 00:48:22.208585 | orchestrator | 2026-01-02 00:48:22.208680 | orchestrator | 2026-01-02 00:48:22.208696 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-01-02 00:48:22.208708 | orchestrator | 2026-01-02 00:48:22.208720 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-01-02 00:48:22.208732 | orchestrator | Friday 02 January 2026 00:46:56 +0000 (0:00:00.615) 0:00:00.615 ******** 2026-01-02 00:48:22.208743 | orchestrator | ok: [testbed-manager] => { 2026-01-02 00:48:22.208759 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-01-02 00:48:22.208773 | orchestrator | }

2026-01-02 00:48:22.208829 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-01-02 00:48:22.208841 | orchestrator | Friday 02 January 2026 00:46:57 +0000 (0:00:00.372) 0:00:00.987 ********
2026-01-02 00:48:22.208852 | orchestrator | ok: [testbed-manager]

2026-01-02 00:48:22.208875 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-01-02 00:48:22.208886 | orchestrator | Friday 02 January 2026 00:47:00 +0000 (0:00:02.844) 0:00:03.832 ********
2026-01-02 00:48:22.208897 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-01-02 00:48:22.208923 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)

2026-01-02 00:48:22.208946 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-01-02 00:48:22.208995 | orchestrator | Friday 02 January 2026 00:47:00 +0000 (0:00:00.928) 0:00:04.760 ********
2026-01-02 00:48:22.209007 | orchestrator | changed: [testbed-manager]

2026-01-02 00:48:22.209029 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-01-02 00:48:22.209040 | orchestrator | Friday 02 January 2026 00:47:03 +0000 (0:00:02.835) 0:00:07.595 ********
2026-01-02 00:48:22.209051 | orchestrator | changed: [testbed-manager]

2026-01-02 00:48:22.209073 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-01-02 00:48:22.209084 | orchestrator | Friday 02 January 2026 00:47:06 +0000 (0:00:02.710) 0:00:10.305 ********
2026-01-02 00:48:22.209095 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-01-02 00:48:22.209105 | orchestrator | ok: [testbed-manager]

2026-01-02 00:48:22.209127 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-01-02 00:48:22.209138 | orchestrator | Friday 02 January 2026 00:47:34 +0000 (0:00:27.823) 0:00:38.129 ********
2026-01-02 00:48:22.209149 | orchestrator | changed: [testbed-manager]

2026-01-02 00:48:22.209170 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:48:22.209182 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

2026-01-02 00:48:22.209217 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:48:22.209228 | orchestrator | Friday 02 January 2026 00:47:39 +0000 (0:00:04.691) 0:00:42.820 ********
2026-01-02 00:48:22.209239 | orchestrator | ===============================================================================
2026-01-02 00:48:22.209250 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.82s
2026-01-02 00:48:22.209261 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.69s
2026-01-02 00:48:22.209272 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.84s
2026-01-02 00:48:22.209282 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.84s
2026-01-02 00:48:22.209293 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.71s
2026-01-02 00:48:22.209304 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.93s
2026-01-02 00:48:22.209325 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.37s

2026-01-02 00:48:22.209358 | orchestrator | PLAY [Apply role openstackclient] **********************************************

2026-01-02 00:48:22.209380 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-01-02 00:48:22.209404 | orchestrator | Friday 02 January 2026 00:46:57 +0000 (0:00:00.624) 0:00:00.624 ********
2026-01-02 00:48:22.209416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager

2026-01-02 00:48:22.209440 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-01-02 00:48:22.209451 | orchestrator | Friday 02 January 2026 00:46:58 +0000 (0:00:01.097) 0:00:01.721 ********
2026-01-02 00:48:22.209462 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-01-02 00:48:22.209473 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-01-02 00:48:22.209484 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)

2026-01-02 00:48:22.209512 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-01-02 00:48:22.209531 | orchestrator | Friday 02 January 2026 00:47:00 +0000 (0:00:02.525) 0:00:04.247 ********
2026-01-02 00:48:22.209551 | orchestrator | changed: [testbed-manager]

2026-01-02 00:48:22.209587 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-01-02 00:48:22.209605 | orchestrator | Friday 02 January 2026 00:47:02 +0000 (0:00:02.026) 0:00:06.273 ********
2026-01-02 00:48:22.209641 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-01-02 00:48:22.209659 | orchestrator | ok: [testbed-manager]

2026-01-02 00:48:22.209681 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-01-02 00:48:22.209692 | orchestrator | Friday 02 January 2026 00:47:39 +0000 (0:00:36.387) 0:00:42.661 ********
2026-01-02 00:48:22.209703 | orchestrator | changed: [testbed-manager]

2026-01-02 00:48:22.209724 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-01-02 00:48:22.209735 | orchestrator | Friday 02 January 2026 00:47:40 +0000 (0:00:01.695) 0:00:44.356 ********
2026-01-02 00:48:22.209746 | orchestrator | ok: [testbed-manager]

2026-01-02 00:48:22.209768 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-01-02 00:48:22.209779 | orchestrator | Friday 02 January 2026 00:47:41 +0000 (0:00:01.079) 0:00:45.436 ********
2026-01-02 00:48:22.209790 | orchestrator | changed: [testbed-manager]

2026-01-02 00:48:22.209811 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-01-02 00:48:22.209822 | orchestrator | Friday 02 January 2026 00:47:44 +0000 (0:00:02.910) 0:00:48.347 ********
2026-01-02 00:48:22.209833 | orchestrator | changed: [testbed-manager]

2026-01-02 00:48:22.209855 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-01-02 00:48:22.209866 | orchestrator | Friday 02 January 2026 00:47:46 +0000 (0:00:01.333) 0:00:49.681 ********
2026-01-02 00:48:22.209877 | orchestrator | changed: [testbed-manager]

2026-01-02 00:48:22.209898 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-01-02 00:48:22.209909 | orchestrator | Friday 02 January 2026 00:47:47 +0000 (0:00:00.774) 0:00:50.456 ********
2026-01-02 00:48:22.209920 | orchestrator | ok: [testbed-manager]

2026-01-02 00:48:22.209942 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:48:22.209983 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

2026-01-02 00:48:22.210089 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:48:22.210103 | orchestrator | Friday 02 January 2026 00:47:47 +0000 (0:00:00.523) 0:00:50.979 ********
2026-01-02 00:48:22.210114 | orchestrator | ===============================================================================
2026-01-02 00:48:22.210125 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.39s
2026-01-02 00:48:22.210136 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.91s
2026-01-02 00:48:22.210147 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.53s
2026-01-02 00:48:22.210157 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.03s
2026-01-02 00:48:22.210168 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.70s
2026-01-02 00:48:22.210179 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.33s
2026-01-02 00:48:22.210190 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.10s
2026-01-02 00:48:22.210200 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.08s
2026-01-02 00:48:22.210211 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.77s
2026-01-02 00:48:22.210222 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.52s

2026-01-02 00:48:22.210483 | orchestrator | PLAY [Group hosts based on configuration] **************************************

2026-01-02 00:48:22.210500 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-02 00:48:22.210506 | orchestrator | Friday 02 January 2026 00:46:56 +0000 (0:00:00.609) 0:00:00.609 ********
2026-01-02 00:48:22.210513 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-01-02 00:48:22.210520 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-01-02 00:48:22.210526 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-01-02 00:48:22.210534 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-01-02 00:48:22.210551 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-01-02 00:48:22.210558 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-01-02 00:48:22.210565 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)

2026-01-02 00:48:22.210577 | orchestrator | PLAY [Apply role netdata] ******************************************************

2026-01-02 00:48:22.210589 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-01-02 00:48:22.210595 | orchestrator | Friday 02 January 2026 00:46:58 +0000 (0:00:02.043) 0:00:02.653 ********
2026-01-02 00:48:22.210611 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

2026-01-02 00:48:22.210625 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-01-02 00:48:22.210630 | orchestrator | Friday 02 January 2026 00:47:00 +0000 (0:00:01.405) 0:00:04.058 ********
2026-01-02 00:48:22.210636 | orchestrator | ok: [testbed-manager]
2026-01-02 00:48:22.210643 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:48:22.210650 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:48:22.210656 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:48:22.210662 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:48:22.210668 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:48:22.210691 | orchestrator | ok: [testbed-node-5]

2026-01-02 00:48:22.210704 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-01-02 00:48:22.210710 | orchestrator | Friday 02 January 2026 00:47:02 +0000 (0:00:02.311) 0:00:06.369 ********
2026-01-02 00:48:22.210716 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:48:22.210722 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:48:22.210728 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:48:22.210734 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:48:22.210740 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:48:22.210746 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:48:22.210752 | orchestrator | ok: [testbed-manager]

2026-01-02 00:48:22.210764 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-01-02 00:48:22.210770 | orchestrator | Friday 02 January 2026 00:47:06 +0000 (0:00:03.930) 0:00:10.299 ********
2026-01-02 00:48:22.210777 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:48:22.210784 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:48:22.210790 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:48:22.210796 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:48:22.210802 | orchestrator | changed: [testbed-manager]
2026-01-02 00:48:22.210808 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:48:22.210814 | orchestrator | changed: [testbed-node-1]

2026-01-02 00:48:22.210826 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-01-02 00:48:22.210833 | orchestrator | Friday 02 January 2026 00:47:08 +0000 (0:00:01.826) 0:00:12.125 ********
2026-01-02 00:48:22.210839 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:48:22.210845 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:48:22.210852 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:48:22.210858 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:48:22.210864 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:48:22.210870 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:48:22.210876 | orchestrator | changed: [testbed-manager]

2026-01-02 00:48:22.210888 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-01-02 00:48:22.210894 | orchestrator | Friday 02 January 2026 00:47:21 +0000 (0:00:12.843) 0:00:24.969 ********
2026-01-02 00:48:22.210900 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:48:22.210906 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:48:22.210912 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:48:22.210919 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:48:22.210926 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:48:22.210933 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:48:22.210939 | orchestrator | changed: [testbed-manager]

2026-01-02 00:48:22.210952 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-01-02 00:48:22.210978 | orchestrator | Friday 02 January 2026 00:48:01 +0000 (0:00:40.359) 0:01:05.329 ********
2026-01-02 00:48:22.210989 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

2026-01-02 00:48:22.211011 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-01-02 00:48:22.211022 | orchestrator | Friday 02 January 2026 00:48:03 +0000 (0:00:01.842) 0:01:07.171 ********
2026-01-02 00:48:22.211032 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-01-02 00:48:22.211042 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-01-02 00:48:22.211051 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-01-02 00:48:22.211062 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-01-02 00:48:22.211085 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-01-02 00:48:22.211102 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-01-02 00:48:22.211112 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-01-02 00:48:22.211121 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-01-02 00:48:22.211131 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-01-02 00:48:22.211141 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-01-02 00:48:22.211152 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-01-02 00:48:22.211161 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-01-02 00:48:22.211171 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-01-02 00:48:22.211186 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)

2026-01-02 00:48:22.211206 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-01-02 00:48:22.211217 | orchestrator | Friday 02 January 2026 00:48:08 +0000 (0:00:05.242) 0:01:12.414 ********
2026-01-02 00:48:22.211227 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:48:22.211238 | orchestrator | ok: [testbed-manager]
2026-01-02 00:48:22.211246 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:48:22.211252 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:48:22.211259 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:48:22.211265 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:48:22.211272 | orchestrator | ok: [testbed-node-5]

2026-01-02 00:48:22.211285 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-01-02 00:48:22.211291 | orchestrator | Friday 02 January 2026 00:48:09 +0000 (0:00:01.126) 0:01:13.540 ********
2026-01-02 00:48:22.211297 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:48:22.211304 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:48:22.211311 | orchestrator | changed: [testbed-manager]
2026-01-02 00:48:22.211317 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:48:22.211324 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:48:22.211330 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:48:22.211336 | orchestrator | changed: [testbed-node-5]

2026-01-02 00:48:22.211348 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-01-02 00:48:22.211355 | orchestrator | Friday 02 January 2026 00:48:11 +0000 (0:00:01.555) 0:01:15.096 ********
2026-01-02 00:48:22.211361 | orchestrator | ok: [testbed-manager]
2026-01-02 00:48:22.211367 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:48:22.211373 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:48:22.211379 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:48:22.211385 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:48:22.211391 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:48:22.211397 | orchestrator | ok: [testbed-node-5]

2026-01-02 00:48:22.211410 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-01-02 00:48:22.211416 | orchestrator | Friday 02 January 2026 00:48:12 +0000 (0:00:01.372) 0:01:16.468 ********
2026-01-02 00:48:22.211423 | orchestrator | ok: [testbed-manager]
2026-01-02 00:48:22.211429 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:48:22.211435 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:48:22.211441 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:48:22.211447 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:48:22.211453 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:48:22.211459 | orchestrator | ok: [testbed-node-5]

2026-01-02 00:48:22.211472 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-01-02 00:48:22.211478 | orchestrator | Friday 02 January 2026 00:48:14 +0000 (0:00:01.985) 0:01:18.453 ********
2026-01-02 00:48:22.211485 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-01-02 00:48:22.211494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

2026-01-02 00:48:22.211515 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-01-02 00:48:22.211521 | orchestrator | Friday 02 January 2026 00:48:16 +0000 (0:00:01.417) 0:01:19.871 ********
2026-01-02 00:48:22.211527 | orchestrator | changed: [testbed-manager]

2026-01-02 00:48:22.211541 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-01-02 00:48:22.211547 | orchestrator | Friday 02 January 2026 00:48:18 +0000 (0:00:01.935) 0:01:21.807 ********
2026-01-02 00:48:22.211553 | orchestrator | changed: [testbed-manager]
2026-01-02 00:48:22.211560 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:48:22.211566 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:48:22.211572 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:48:22.211579 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:48:22.211585 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:48:22.211592 | orchestrator | changed: [testbed-node-2]

2026-01-02 00:48:22.211603 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:48:22.211610 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:48:22.211617 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:48:22.211623 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:48:22.211629 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:48:22.211644 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:48:22.211651 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:48:22.211656 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

2026-01-02 00:48:22.211675 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:48:22.211681 | orchestrator | Friday 02 January 2026 00:48:20 +0000 (0:00:02.816) 0:01:24.624 ********
2026-01-02 00:48:22.211693 | orchestrator | ===============================================================================
2026-01-02 00:48:22.211699 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 40.36s
2026-01-02 00:48:22.211705 | orchestrator | osism.services.netdata : Add repository -------------------------------- 12.84s
2026-01-02 00:48:22.211711 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.24s
2026-01-02 00:48:22.211717 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.93s
2026-01-02 00:48:22.211723 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.82s
2026-01-02 00:48:22.211730 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.31s
2026-01-02 00:48:22.211736 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.04s
2026-01-02 00:48:22.211742 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.99s
2026-01-02 00:48:22.211748 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.94s
2026-01-02 00:48:22.211754 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.84s
2026-01-02 00:48:22.211760 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.83s
2026-01-02 00:48:22.211773 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.56s
2026-01-02 00:48:22.211780 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.42s
2026-01-02 00:48:22.211786 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.41s
2026-01-02 00:48:22.211792 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.37s
2026-01-02 00:48:22.211798 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.13s
2026-01-02 00:48:22.211805 | orchestrator | 2026-01-02 00:48:22 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:48:22.215512 | orchestrator | 2026-01-02 00:48:22 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state STARTED
2026-01-02 00:48:22.216477 | orchestrator | 2026-01-02 00:48:22 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:48:22.216779 | orchestrator | 2026-01-02 00:48:22 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:48:28.310790 | orchestrator | 2026-01-02 00:48:28 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:48:28.312103 | orchestrator | 2026-01-02 00:48:28 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:48:28.313746 | orchestrator | 2026-01-02 00:48:28 | INFO  | Task 1312610c-dd1f-4757-b269-593e5139d4a2 is in state SUCCESS
2026-01-02 00:48:28.314889 | orchestrator | 2026-01-02 00:48:28 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state STARTED
2026-01-02 00:48:28.315013 | orchestrator | 2026-01-02 00:48:28 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:11.070402 | orchestrator | 2026-01-02 00:49:11 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:49:11.073541 | orchestrator | 2026-01-02 00:49:11 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:49:11.079366 | orchestrator | 2026-01-02 00:49:11 | INFO  | Task 0b874a16-29a4-4197-bf8b-175a3be81b20 is in state SUCCESS

2026-01-02 00:49:11.079478 | orchestrator | PLAY [Apply role phpmyadmin]
***************************************************
2026-01-02 00:49:11.079483 | orchestrator |
2026-01-02 00:49:11.079488 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-01-02 00:49:11.079493 | orchestrator | Friday 02 January 2026 00:47:16 +0000 (0:00:00.305) 0:00:00.305 ********
2026-01-02 00:49:11.079498 | orchestrator | ok: [testbed-manager]
2026-01-02 00:49:11.079504 | orchestrator |
2026-01-02 00:49:11.079508 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-01-02 00:49:11.079512 | orchestrator | Friday 02 January 2026 00:47:17 +0000 (0:00:01.273) 0:00:01.578 ********
2026-01-02 00:49:11.079517 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-01-02 00:49:11.079522 | orchestrator |
2026-01-02 00:49:11.079526 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-01-02 00:49:11.079531 | orchestrator | Friday 02 January 2026 00:47:18 +0000 (0:00:00.734) 0:00:02.313 ********
2026-01-02 00:49:11.079536 | orchestrator | changed: [testbed-manager]
2026-01-02 00:49:11.079540 | orchestrator |
2026-01-02 00:49:11.079544 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-01-02 00:49:11.079549 | orchestrator | Friday 02 January 2026 00:47:19 +0000 (0:00:01.169) 0:00:03.483 ********
2026-01-02 00:49:11.079553 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-01-02 00:49:11.079557 | orchestrator | ok: [testbed-manager]
2026-01-02 00:49:11.079562 | orchestrator |
2026-01-02 00:49:11.079566 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-01-02 00:49:11.079570 | orchestrator | Friday 02 January 2026 00:48:15 +0000 (0:00:55.901) 0:00:59.384 ********
2026-01-02 00:49:11.079596 | orchestrator | changed: [testbed-manager]
2026-01-02 00:49:11.079601 | orchestrator |
2026-01-02 00:49:11.079605 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:49:11.079609 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:49:11.079615 | orchestrator |
2026-01-02 00:49:11.079619 | orchestrator |
2026-01-02 00:49:11.079624 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:49:11.079628 | orchestrator | Friday 02 January 2026 00:48:26 +0000 (0:00:10.927) 0:01:10.312 ********
2026-01-02 00:49:11.079632 | orchestrator | ===============================================================================
2026-01-02 00:49:11.079636 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 55.90s
2026-01-02 00:49:11.079640 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 10.93s
2026-01-02 00:49:11.079645 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.27s
2026-01-02 00:49:11.079649 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.17s
2026-01-02 00:49:11.079653 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.73s
2026-01-02 00:49:11.079657 | orchestrator |
2026-01-02 00:49:11.081584 | orchestrator |
2026-01-02 00:49:11.081639 | orchestrator | PLAY [Apply role common]
*******************************************************
2026-01-02 00:49:11.081646 | orchestrator |
2026-01-02 00:49:11.081651 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-02 00:49:11.081655 | orchestrator | Friday 02 January 2026 00:46:48 +0000 (0:00:00.288) 0:00:00.288 ********
2026-01-02 00:49:11.081661 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:49:11.081666 | orchestrator |
2026-01-02 00:49:11.081670 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-01-02 00:49:11.081675 | orchestrator | Friday 02 January 2026 00:46:50 +0000 (0:00:01.367) 0:00:01.656 ********
2026-01-02 00:49:11.081680 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-02 00:49:11.081684 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-02 00:49:11.081688 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-02 00:49:11.081692 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-02 00:49:11.081700 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-02 00:49:11.081705 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-02 00:49:11.081709 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-02 00:49:11.081713 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-02 00:49:11.081717 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-02 00:49:11.081722 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-02 00:49:11.081726 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-02 00:49:11.081731 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-02 00:49:11.081735 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-02 00:49:11.081739 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-02 00:49:11.081744 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-02 00:49:11.081761 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-02 00:49:11.081775 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-02 00:49:11.081779 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-02 00:49:11.081784 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-02 00:49:11.081788 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-02 00:49:11.081792 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-02 00:49:11.081796 | orchestrator |
2026-01-02 00:49:11.081800 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-02 00:49:11.081805 | orchestrator | Friday 02 January 2026 00:46:54 +0000 (0:00:04.378) 0:00:06.034 ********
2026-01-02 00:49:11.081809 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:49:11.081815 | orchestrator |
2026-01-02 00:49:11.081819 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-01-02 00:49:11.081823 | orchestrator | Friday 02 January 2026 00:46:55 +0000 (0:00:01.510) 0:00:07.545 ********
2026-01-02 00:49:11.081831 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.081838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.081852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.081857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.081866 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.081874 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.081879 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.081883 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.081888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.081900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.081905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.081912 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.081920 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.081925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.081955 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.081963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.081968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.081977 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.081981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.081988 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.081996 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.082000 | orchestrator |
2026-01-02 00:49:11.082004 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-01-02 00:49:11.082009 | orchestrator | Friday 02 January 2026 00:47:00 +0000 (0:00:05.021) 0:00:12.566 ********
2026-01-02 00:49:11.082014 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.082058 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.082062 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.082067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.082079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.082084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.082095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.082100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.082104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.082109 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:49:11.082114 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:49:11.082118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.082123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.082127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.082131 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:49:11.082139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.082146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.082154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.082159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.082164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.082169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.082174 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:49:11.082179 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:49:11.082184 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:49:11.082189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.082198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.082203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.082211 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:49:11.082216 | orchestrator |
2026-01-02 00:49:11.082221 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-01-02 00:49:11.082226 | orchestrator | Friday 02 January 2026 00:47:02 +0000 (0:00:01.676) 0:00:14.243 ********
2026-01-02 00:49:11.082232 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.082241 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.082246 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image':
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:49:11.082251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:49:11.082256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:49:11.082261 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:49:11.082266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-01-02 00:49:11.082275 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:49:11.082287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:49:11.082295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:49:11.082300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:49:11.082305 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:49:11.082310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:49:11.082316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:49:11.082321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:49:11.082326 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:49:11.082331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:49:11.082729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:49:11.082804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:49:11.082812 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:49:11.082822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:49:11.082827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:49:11.082831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:49:11.082835 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:49:11.082839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:49:11.082844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:49:11.082848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:49:11.082857 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:49:11.082861 | orchestrator | 2026-01-02 00:49:11.082866 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-01-02 00:49:11.082870 | orchestrator | Friday 02 January 2026 00:47:05 +0000 (0:00:02.885) 0:00:17.128 ******** 2026-01-02 00:49:11.082874 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:49:11.082878 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:49:11.082882 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:49:11.082886 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:49:11.082890 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:49:11.082902 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:49:11.082906 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:49:11.082910 | orchestrator | 2026-01-02 00:49:11.082914 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-01-02 00:49:11.082918 | orchestrator | Friday 02 January 2026 00:47:06 +0000 (0:00:01.011) 0:00:18.139 ******** 2026-01-02 00:49:11.082922 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:49:11.082926 | orchestrator | skipping: [testbed-node-1] 
2026-01-02 00:49:11.082930 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:49:11.082953 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:49:11.082958 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:49:11.082962 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:49:11.082965 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:49:11.082969 | orchestrator |
2026-01-02 00:49:11.082973 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-01-02 00:49:11.082977 | orchestrator | Friday 02 January 2026 00:47:07 +0000 (0:00:01.203) 0:00:19.343 ********
2026-01-02 00:49:11.082984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.082988 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.082992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.082996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.083003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.083007 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.083015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.083020 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.083026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.083030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.083034 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.083042 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.083046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.083050 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.083058 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.083064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.083068 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.083072 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.083076 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.083083 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.083087 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.083091 | orchestrator |
2026-01-02 00:49:11.083095 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-01-02 00:49:11.083099 | orchestrator | Friday 02 January 2026 00:47:15 +0000 (0:00:07.690) 0:00:27.034 ********
2026-01-02 00:49:11.083103 | orchestrator | [WARNING]: Skipped
2026-01-02 00:49:11.083108 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-01-02 00:49:11.083112 | orchestrator | to this access issue:
2026-01-02 00:49:11.083116 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-01-02 00:49:11.083120 | orchestrator | directory
2026-01-02 00:49:11.083125 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-02 00:49:11.083129 | orchestrator |
2026-01-02 00:49:11.083132 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-01-02 00:49:11.083136 | orchestrator | Friday 02 January 2026 00:47:17 +0000 (0:00:01.983) 0:00:29.017 ********
2026-01-02 00:49:11.083140 | orchestrator | [WARNING]: Skipped
2026-01-02 00:49:11.083144 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-01-02 00:49:11.083151 | orchestrator | to this access issue:
2026-01-02 00:49:11.083155 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-01-02 00:49:11.083159 | orchestrator | directory
2026-01-02 00:49:11.083162 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-02 00:49:11.083166 | orchestrator |
2026-01-02 00:49:11.083170 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-01-02 00:49:11.083174 | orchestrator | Friday 02 January 2026 00:47:18 +0000 (0:00:00.869) 0:00:29.886 ********
2026-01-02 00:49:11.083178 | orchestrator | [WARNING]: Skipped
2026-01-02 00:49:11.083181 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-01-02 00:49:11.083185 | orchestrator | to this access issue:
2026-01-02 00:49:11.083189 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-01-02 00:49:11.083193 | orchestrator | directory
2026-01-02 00:49:11.083197 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-02 00:49:11.083201 | orchestrator |
2026-01-02 00:49:11.083204 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-01-02 00:49:11.083208 | orchestrator | Friday 02 January 2026 00:47:19 +0000 (0:00:00.934) 0:00:30.821 ********
2026-01-02 00:49:11.083212 | orchestrator | [WARNING]: Skipped
2026-01-02 00:49:11.083216 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-01-02 00:49:11.083220 | orchestrator | to this access issue:
2026-01-02 00:49:11.083226 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-01-02 00:49:11.083233 | orchestrator | directory
2026-01-02 00:49:11.083237 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-02 00:49:11.083241 | orchestrator |
2026-01-02 00:49:11.083245 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-01-02 00:49:11.083248 | orchestrator | Friday 02 January 2026 00:47:20 +0000 (0:00:01.047) 0:00:31.868 ********
2026-01-02 00:49:11.083252 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:49:11.083256 | orchestrator | changed: [testbed-manager]
2026-01-02 00:49:11.083260 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:49:11.083264 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:49:11.083267 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:49:11.083271 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:49:11.083275 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:49:11.083279 | orchestrator |
2026-01-02 00:49:11.083282 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-01-02 00:49:11.083286 | orchestrator | Friday 02 January 2026 00:47:25 +0000 (0:00:04.850) 0:00:36.719 ********
2026-01-02 00:49:11.083290 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-02 00:49:11.083294 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-02 00:49:11.083299 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-02 00:49:11.083304 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-02 00:49:11.083309 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-02 00:49:11.083313 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-02 00:49:11.083317 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-02 00:49:11.083322 | orchestrator |
2026-01-02 00:49:11.083327 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-01-02 00:49:11.083331 | orchestrator | Friday 02 January 2026 00:47:29 +0000 (0:00:04.414) 0:00:40.970 ********
2026-01-02 00:49:11.083336 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:49:11.083341 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:49:11.083345 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:49:11.083350 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:49:11.083355 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:49:11.083359 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:49:11.083364 | orchestrator | changed: [testbed-manager]
2026-01-02 00:49:11.083368 | orchestrator |
2026-01-02 00:49:11.083373 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-01-02 00:49:11.083377 | orchestrator | Friday 02 January 2026 00:47:33 +0000 (0:00:04.414) 0:00:45.385 ********
2026-01-02 00:49:11.083382 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:49:11.083390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:49:11.083398 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2',
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:49:11.083405 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:49:11.083410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:49:11.083415 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:49:11.083420 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:49:11.083425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:49:11.083429 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083441 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083446 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:49:11.083451 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:49:11.083465 | orchestrator | 
ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:49:11.083471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:49:11.083476 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083481 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083491 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083496 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:49:11.083504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:49:11.083509 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083514 | orchestrator | 2026-01-02 00:49:11.083518 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-01-02 00:49:11.083523 | orchestrator | Friday 02 January 2026 00:47:36 +0000 (0:00:03.252) 0:00:48.637 ******** 2026-01-02 00:49:11.083527 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-02 00:49:11.083539 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-02 00:49:11.083544 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-02 00:49:11.083549 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-02 00:49:11.083553 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-02 00:49:11.083564 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-02 00:49:11.083569 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-02 00:49:11.083573 | orchestrator | 2026-01-02 00:49:11.083578 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-01-02 00:49:11.083582 | orchestrator | Friday 02 January 2026 00:47:40 +0000 (0:00:03.510) 0:00:52.148 ******** 2026-01-02 00:49:11.083587 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-02 00:49:11.083591 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-02 00:49:11.083595 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-02 00:49:11.083600 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-02 00:49:11.083604 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-02 00:49:11.083613 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-02 00:49:11.083617 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-02 00:49:11.083622 | orchestrator | 2026-01-02 00:49:11.083626 | orchestrator | TASK [common : Check common containers] **************************************** 2026-01-02 00:49:11.083631 | orchestrator | Friday 02 January 2026 00:47:44 +0000 (0:00:03.546) 0:00:55.694 ******** 2026-01-02 00:49:11.083636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:49:11.083645 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:49:11.083650 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:49:11.083658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:49:11.083663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083668 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:49:11.083672 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083691 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:49:11.083697 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083716 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:49:11.083720 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083727 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083731 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083737 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083741 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-02 00:49:11.083745 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:49:11.083749 | orchestrator | 2026-01-02 00:49:11.083753 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-01-02 00:49:11.083757 | orchestrator | Friday 02 January 2026 00:47:48 +0000 (0:00:04.561) 0:01:00.255 ******** 2026-01-02 00:49:11.083761 | orchestrator | changed: [testbed-manager] 2026-01-02 00:49:11.083768 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:49:11.083772 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:49:11.083776 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:49:11.083780 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:49:11.083784 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:49:11.083788 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:49:11.083791 | orchestrator | 2026-01-02 00:49:11.083795 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-01-02 00:49:11.083799 | orchestrator | Friday 02 January 2026 00:47:50 +0000 (0:00:01.916) 0:01:02.172 ******** 2026-01-02 00:49:11.083803 | orchestrator | changed: [testbed-manager] 2026-01-02 00:49:11.083807 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:49:11.083811 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:49:11.083815 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:49:11.083820 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:49:11.083826 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:49:11.083831 | orchestrator | changed: 
[testbed-node-5] 2026-01-02 00:49:11.083837 | orchestrator | 2026-01-02 00:49:11.083843 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-02 00:49:11.083849 | orchestrator | Friday 02 January 2026 00:47:51 +0000 (0:00:01.439) 0:01:03.611 ******** 2026-01-02 00:49:11.083855 | orchestrator | 2026-01-02 00:49:11.083862 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-02 00:49:11.083868 | orchestrator | Friday 02 January 2026 00:47:52 +0000 (0:00:00.080) 0:01:03.691 ******** 2026-01-02 00:49:11.083874 | orchestrator | 2026-01-02 00:49:11.083880 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-02 00:49:11.083885 | orchestrator | Friday 02 January 2026 00:47:52 +0000 (0:00:00.104) 0:01:03.796 ******** 2026-01-02 00:49:11.083891 | orchestrator | 2026-01-02 00:49:11.083897 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-02 00:49:11.083904 | orchestrator | Friday 02 January 2026 00:47:52 +0000 (0:00:00.296) 0:01:04.093 ******** 2026-01-02 00:49:11.083910 | orchestrator | 2026-01-02 00:49:11.083917 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-02 00:49:11.083923 | orchestrator | Friday 02 January 2026 00:47:52 +0000 (0:00:00.082) 0:01:04.176 ******** 2026-01-02 00:49:11.083929 | orchestrator | 2026-01-02 00:49:11.083984 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-02 00:49:11.083993 | orchestrator | Friday 02 January 2026 00:47:52 +0000 (0:00:00.071) 0:01:04.248 ******** 2026-01-02 00:49:11.084001 | orchestrator | 2026-01-02 00:49:11.084005 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-02 00:49:11.084009 | orchestrator | Friday 02 January 2026 00:47:52 +0000 (0:00:00.075) 
0:01:04.323 ********
2026-01-02 00:49:11.084015 | orchestrator |
2026-01-02 00:49:11.084021 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-01-02 00:49:11.084032 | orchestrator | Friday 02 January 2026 00:47:52 +0000 (0:00:00.095) 0:01:04.418 ********
2026-01-02 00:49:11.084039 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:49:11.084045 | orchestrator | changed: [testbed-manager]
2026-01-02 00:49:11.084052 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:49:11.084059 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:49:11.084065 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:49:11.084071 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:49:11.084075 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:49:11.084079 | orchestrator |
2026-01-02 00:49:11.084083 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-01-02 00:49:11.084087 | orchestrator | Friday 02 January 2026 00:48:23 +0000 (0:00:31.060) 0:01:35.478 ********
2026-01-02 00:49:11.084091 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:49:11.084094 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:49:11.084098 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:49:11.084102 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:49:11.084111 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:49:11.084115 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:49:11.084120 | orchestrator | changed: [testbed-manager]
2026-01-02 00:49:11.084123 | orchestrator |
2026-01-02 00:49:11.084127 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-01-02 00:49:11.084131 | orchestrator | Friday 02 January 2026 00:49:02 +0000 (0:00:38.673) 0:02:14.152 ********
2026-01-02 00:49:11.084135 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:49:11.084139 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:49:11.084143 | orchestrator | ok: [testbed-manager]
2026-01-02 00:49:11.084151 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:49:11.084155 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:49:11.084159 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:49:11.084163 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:49:11.084167 | orchestrator |
2026-01-02 00:49:11.084171 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-01-02 00:49:11.084175 | orchestrator | Friday 02 January 2026 00:49:05 +0000 (0:00:02.734) 0:02:16.886 ********
2026-01-02 00:49:11.084179 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:49:11.084183 | orchestrator | changed: [testbed-manager]
2026-01-02 00:49:11.084186 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:49:11.084190 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:49:11.084194 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:49:11.084198 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:49:11.084202 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:49:11.084206 | orchestrator |
2026-01-02 00:49:11.084210 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:49:11.084215 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-02 00:49:11.084220 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-02 00:49:11.084225 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-02 00:49:11.084229 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-02 00:49:11.084232 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-02 00:49:11.084236 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-02 00:49:11.084240 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-02 00:49:11.084244 | orchestrator |
2026-01-02 00:49:11.084248 | orchestrator |
2026-01-02 00:49:11.084252 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:49:11.084256 | orchestrator | Friday 02 January 2026 00:49:10 +0000 (0:00:05.099) 0:02:21.985 ********
2026-01-02 00:49:11.084260 | orchestrator | ===============================================================================
2026-01-02 00:49:11.084264 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 38.67s
2026-01-02 00:49:11.084268 | orchestrator | common : Restart fluentd container ------------------------------------- 31.06s
2026-01-02 00:49:11.084273 | orchestrator | common : Copying over config.json files for services -------------------- 7.69s
2026-01-02 00:49:11.084277 | orchestrator | common : Restart cron container ----------------------------------------- 5.10s
2026-01-02 00:49:11.084280 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.02s
2026-01-02 00:49:11.084284 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.85s
2026-01-02 00:49:11.084292 | orchestrator | common : Check common containers ---------------------------------------- 4.56s
2026-01-02 00:49:11.084295 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.41s
2026-01-02 00:49:11.084299 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.38s
2026-01-02 00:49:11.084303 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.25s
2026-01-02 00:49:11.084307 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.55s
2026-01-02 00:49:11.084311 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.51s
2026-01-02 00:49:11.084315 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.25s
2026-01-02 00:49:11.084318 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.89s
2026-01-02 00:49:11.084325 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.73s
2026-01-02 00:49:11.084329 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.98s
2026-01-02 00:49:11.084334 | orchestrator | common : Creating log volume -------------------------------------------- 1.92s
2026-01-02 00:49:11.084338 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.68s
2026-01-02 00:49:11.084341 | orchestrator | common : include_tasks -------------------------------------------------- 1.51s
2026-01-02 00:49:11.084345 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.44s
2026-01-02 00:49:11.084349 | orchestrator | 2026-01-02 00:49:11 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:14.124472 | orchestrator | 2026-01-02 00:49:14 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:49:14.124998 | orchestrator | 2026-01-02 00:49:14 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:49:14.126477 | orchestrator | 2026-01-02 00:49:14 | INFO  | Task eabcdaf4-7f57-451d-9af8-719ace049053 is in state STARTED
2026-01-02 00:49:14.128905 | orchestrator | 2026-01-02 00:49:14 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED
2026-01-02 00:49:14.129733 | orchestrator | 2026-01-02 00:49:14 | INFO  | Task 5d4f7634-bd82-488b-ac5c-b4c027234773 is in state STARTED
2026-01-02 00:49:14.130538 | orchestrator | 2026-01-02 00:49:14 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:49:14.130693 | orchestrator | 2026-01-02 00:49:14 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:17.199212 | orchestrator | 2026-01-02 00:49:17 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:49:17.199556 | orchestrator | 2026-01-02 00:49:17 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:49:17.200626 | orchestrator | 2026-01-02 00:49:17 | INFO  | Task eabcdaf4-7f57-451d-9af8-719ace049053 is in state STARTED
2026-01-02 00:49:17.201440 | orchestrator | 2026-01-02 00:49:17 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED
2026-01-02 00:49:17.202454 | orchestrator | 2026-01-02 00:49:17 | INFO  | Task 5d4f7634-bd82-488b-ac5c-b4c027234773 is in state STARTED
2026-01-02 00:49:17.203607 | orchestrator | 2026-01-02 00:49:17 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:49:17.203759 | orchestrator | 2026-01-02 00:49:17 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:20.229906 | orchestrator | 2026-01-02 00:49:20 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:49:20.230200 | orchestrator | 2026-01-02 00:49:20 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:49:20.231015 | orchestrator | 2026-01-02 00:49:20 | INFO  | Task eabcdaf4-7f57-451d-9af8-719ace049053 is in state STARTED
2026-01-02 00:49:20.231481 | orchestrator | 2026-01-02 00:49:20 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED
2026-01-02 00:49:20.232203 | orchestrator | 2026-01-02 00:49:20 | INFO  | Task 5d4f7634-bd82-488b-ac5c-b4c027234773 is in state STARTED
2026-01-02 00:49:20.236100 | orchestrator | 2026-01-02 00:49:20 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:49:20.236135 | orchestrator | 2026-01-02 00:49:20 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:23.270166 | orchestrator | 2026-01-02 00:49:23 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:49:23.270583 | orchestrator | 2026-01-02 00:49:23 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:49:23.273760 | orchestrator | 2026-01-02 00:49:23 | INFO  | Task eabcdaf4-7f57-451d-9af8-719ace049053 is in state STARTED
2026-01-02 00:49:23.274217 | orchestrator | 2026-01-02 00:49:23 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED
2026-01-02 00:49:23.275002 | orchestrator | 2026-01-02 00:49:23 | INFO  | Task 5d4f7634-bd82-488b-ac5c-b4c027234773 is in state STARTED
2026-01-02 00:49:23.276531 | orchestrator | 2026-01-02 00:49:23 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:49:23.279115 | orchestrator | 2026-01-02 00:49:23 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:26.312341 | orchestrator | 2026-01-02 00:49:26 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:49:26.312834 | orchestrator | 2026-01-02 00:49:26 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:49:26.313949 | orchestrator | 2026-01-02 00:49:26 | INFO  | Task eabcdaf4-7f57-451d-9af8-719ace049053 is in state STARTED
2026-01-02 00:49:26.314684 | orchestrator | 2026-01-02 00:49:26 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED
2026-01-02 00:49:26.315619 | orchestrator | 2026-01-02 00:49:26 | INFO  | Task 5d4f7634-bd82-488b-ac5c-b4c027234773 is in state STARTED
2026-01-02 00:49:26.316913 | orchestrator | 2026-01-02 00:49:26 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:49:26.316986 | orchestrator | 2026-01-02 00:49:26 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:29.353308 | orchestrator | 2026-01-02 00:49:29 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:49:29.354249 | orchestrator | 2026-01-02 00:49:29 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:49:29.358395 | orchestrator | 2026-01-02 00:49:29 | INFO  | Task eabcdaf4-7f57-451d-9af8-719ace049053 is in state STARTED
2026-01-02 00:49:29.361132 | orchestrator | 2026-01-02 00:49:29 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED
2026-01-02 00:49:29.362712 | orchestrator | 2026-01-02 00:49:29 | INFO  | Task 5d4f7634-bd82-488b-ac5c-b4c027234773 is in state STARTED
2026-01-02 00:49:29.363849 | orchestrator | 2026-01-02 00:49:29 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:49:29.364266 | orchestrator | 2026-01-02 00:49:29 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:32.409518 | orchestrator | 2026-01-02 00:49:32 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:49:32.410220 | orchestrator | 2026-01-02 00:49:32 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:49:32.412790 | orchestrator | 2026-01-02 00:49:32 | INFO  | Task eabcdaf4-7f57-451d-9af8-719ace049053 is in state SUCCESS
2026-01-02 00:49:32.414524 | orchestrator | 2026-01-02 00:49:32 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:49:32.415578 | orchestrator | 2026-01-02 00:49:32 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED
2026-01-02 00:49:32.417604 | orchestrator | 2026-01-02 00:49:32 | INFO  | Task 5d4f7634-bd82-488b-ac5c-b4c027234773 is in state STARTED
2026-01-02 00:49:32.420999 | orchestrator | 2026-01-02 00:49:32 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:49:32.421043 | orchestrator | 2026-01-02 00:49:32 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:35.672659 | orchestrator | 2026-01-02 00:49:35 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:49:35.673480 | orchestrator | 2026-01-02 00:49:35 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:49:35.674425 | orchestrator | 2026-01-02 00:49:35 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:49:35.678127 | orchestrator | 2026-01-02 00:49:35 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED
2026-01-02 00:49:35.679363 | orchestrator | 2026-01-02 00:49:35 | INFO  | Task 5d4f7634-bd82-488b-ac5c-b4c027234773 is in state STARTED
2026-01-02 00:49:35.680109 | orchestrator | 2026-01-02 00:49:35 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:49:35.680151 | orchestrator | 2026-01-02 00:49:35 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:38.754188 | orchestrator | 2026-01-02 00:49:38 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:49:38.754267 | orchestrator | 2026-01-02 00:49:38 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:49:38.754277 | orchestrator | 2026-01-02 00:49:38 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:49:38.754284 | orchestrator | 2026-01-02 00:49:38 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED
2026-01-02 00:49:38.754290 | orchestrator | 2026-01-02 00:49:38 | INFO  | Task 5d4f7634-bd82-488b-ac5c-b4c027234773 is in state STARTED
2026-01-02 00:49:38.754297 | orchestrator | 2026-01-02 00:49:38 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:49:38.754305 | orchestrator | 2026-01-02 00:49:38 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:41.830515 | orchestrator | 2026-01-02 00:49:41 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:49:41.830710 | orchestrator | 2026-01-02 00:49:41 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:49:41.830837 | orchestrator | 2026-01-02 00:49:41 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:49:41.831665 | orchestrator | 2026-01-02 00:49:41 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED
2026-01-02 00:49:41.833215 | orchestrator | 2026-01-02 00:49:41 | INFO  | Task 5d4f7634-bd82-488b-ac5c-b4c027234773 is in state STARTED
2026-01-02 00:49:41.833835 | orchestrator | 2026-01-02 00:49:41 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:49:41.833867 | orchestrator | 2026-01-02 00:49:41 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:44.871012 | orchestrator | 2026-01-02 00:49:44 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:49:44.873296 | orchestrator | 2026-01-02 00:49:44 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:49:44.873454 | orchestrator | 2026-01-02 00:49:44 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:49:44.873782 | orchestrator | 2026-01-02 00:49:44 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED
2026-01-02 00:49:44.876560 | orchestrator |
2026-01-02 00:49:44.876625 | orchestrator |
2026-01-02 00:49:44.876637 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-02 00:49:44.876648 | orchestrator |
2026-01-02 00:49:44.876656 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-02 00:49:44.876665 | orchestrator | Friday 02 January 2026 00:49:16 +0000 (0:00:00.284) 0:00:00.284 ********
2026-01-02 00:49:44.876675 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:49:44.876685 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:49:44.876695 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:49:44.876704 | orchestrator |
2026-01-02 00:49:44.876712 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-02 00:49:44.876721 | orchestrator | Friday 02 January 2026 00:49:16 +0000 (0:00:00.244) 0:00:00.529 ********
2026-01-02 00:49:44.876730 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-01-02 00:49:44.876739 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-01-02 00:49:44.876748 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-01-02 00:49:44.876756 | orchestrator |
2026-01-02 00:49:44.876764 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-01-02 00:49:44.876772 | orchestrator |
2026-01-02 00:49:44.876779 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-01-02 00:49:44.876788 | orchestrator | Friday 02 January 2026 00:49:16 +0000 (0:00:00.420) 0:00:00.950 ********
2026-01-02 00:49:44.876796 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:49:44.876805 | orchestrator |
2026-01-02 00:49:44.876813 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-01-02 00:49:44.876821 | orchestrator | Friday 02 January 2026 00:49:17 +0000 (0:00:00.659) 0:00:01.609 ********
2026-01-02 00:49:44.876830 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-02 00:49:44.876838 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-02 00:49:44.876847 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-02 00:49:44.876855 | orchestrator |
2026-01-02 00:49:44.876863 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-01-02 00:49:44.876872 | orchestrator | Friday 02 January 2026 00:49:18 +0000 (0:00:00.983) 0:00:02.593 ********
2026-01-02 00:49:44.876881 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-02 00:49:44.876890 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-02 00:49:44.876899 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-02 00:49:44.876908 | orchestrator |
2026-01-02 00:49:44.876955 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-01-02 00:49:44.876965 | orchestrator | Friday 02 January 2026 00:49:20 +0000 (0:00:02.206) 0:00:04.799 ********
2026-01-02 00:49:44.876974 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:49:44.876983 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:49:44.876991 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:49:44.877043 | orchestrator |
2026-01-02 00:49:44.877053 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-01-02 00:49:44.877062 | orchestrator | Friday 02 January 2026 00:49:22 +0000 (0:00:01.823) 0:00:06.623 ********
2026-01-02 00:49:44.877071 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:49:44.877079 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:49:44.877088 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:49:44.877097 | orchestrator |
2026-01-02 00:49:44.877105 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:49:44.877148 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:49:44.877183 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:49:44.877194 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:49:44.877203 | orchestrator |
2026-01-02 00:49:44.877211 | orchestrator |
2026-01-02 00:49:44.877220 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:49:44.877234 | orchestrator | Friday 02 January 2026 00:49:29 +0000 (0:00:06.971) 0:00:13.594 ********
2026-01-02 00:49:44.877243 | orchestrator | ===============================================================================
2026-01-02 00:49:44.877253 | orchestrator | memcached : Restart memcached container --------------------------------- 6.97s
2026-01-02 00:49:44.877262 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.21s
2026-01-02 00:49:44.877271 | orchestrator | memcached : Check memcached container ----------------------------------- 1.82s
2026-01-02 00:49:44.877280 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.98s
2026-01-02 00:49:44.877290 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.66s
2026-01-02 00:49:44.877355 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s
2026-01-02 00:49:44.877364 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.24s
2026-01-02 00:49:44.877373 | orchestrator |
2026-01-02 00:49:44.877382 | orchestrator |
2026-01-02 00:49:44.877391 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-02 00:49:44.877473 | orchestrator |
2026-01-02 00:49:44.877486 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-02 00:49:44.877495 | orchestrator | Friday 02 January 2026 00:49:17 +0000 (0:00:00.465) 0:00:00.465 ********
2026-01-02 00:49:44.877505 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:49:44.877515 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:49:44.877524 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:49:44.877534 | orchestrator |
2026-01-02 00:49:44.877544 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-02 00:49:44.877590 | orchestrator | Friday 02 January 2026 00:49:17 +0000 (0:00:00.471) 0:00:00.936 ********
2026-01-02 00:49:44.877602 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-01-02 00:49:44.877612 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-01-02 00:49:44.877622 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-01-02 00:49:44.877630 | orchestrator |
2026-01-02 00:49:44.877639 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-01-02 00:49:44.877648 | orchestrator |
2026-01-02 00:49:44.877656 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-01-02 00:49:44.877665 | orchestrator | Friday 02 January 2026 00:49:18 +0000 (0:00:00.510) 0:00:01.581 ********
2026-01-02 00:49:44.877673 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:49:44.877682 | orchestrator |
2026-01-02 00:49:44.877691 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-01-02 00:49:44.877699 | orchestrator | Friday 02 January 2026 00:49:18 +0000 (0:00:00.510) 0:00:02.092 ********
2026-01-02 00:49:44.877710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-02 00:49:44.877734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-02 00:49:44.877744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-02 00:49:44.877754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-02 00:49:44.877763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-02 00:49:44.877791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-02 00:49:44.877801 | orchestrator |
2026-01-02 00:49:44.877810 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-01-02 00:49:44.877819 | orchestrator | Friday 02 January 2026 00:49:20 +0000 (0:00:01.333) 0:00:03.426 ********
2026-01-02 00:49:44.877827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-02 00:49:44.877843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-02 00:49:44.877852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-02 00:49:44.877862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-02 00:49:44.877871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-02 00:49:44.877890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-02 00:49:44.877900 | orchestrator |
2026-01-02 00:49:44.877908 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-01-02 00:49:44.877941 | orchestrator | Friday 02 January 2026 00:49:22 +0000 (0:00:02.995) 0:00:06.421 ********
2026-01-02 00:49:44.877951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-02 00:49:44.877975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-02 00:49:44.877984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-02 00:49:44.877993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-02 00:49:44.878002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-02 00:49:44.878077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-02 00:49:44.878092 | orchestrator |
2026-01-02 00:49:44.878108 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-01-02 00:49:44.878117 | orchestrator | Friday 02 January 2026 00:49:25 +0000 (0:00:02.790) 0:00:09.212 ********
2026-01-02 00:49:44.878126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-02 00:49:44.878144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-02 00:49:44.878153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-02 00:49:44.878162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-02 00:49:44.878171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-02 00:49:44.878180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-02 00:49:44.878189 | orchestrator |
2026-01-02 00:49:44.878198 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-02 00:49:44.878212 | orchestrator | Friday 02 January 2026 00:49:27 +0000 (0:00:02.143) 0:00:11.356 ******** 2026-01-02 00:49:44.878220 | orchestrator | 2026-01-02 00:49:44.878229 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-02 00:49:44.878243 | orchestrator | Friday 02 January 2026 00:49:28 +0000 (0:00:00.068) 0:00:11.425 ******** 2026-01-02 00:49:44.878252 | orchestrator | 2026-01-02 00:49:44.878260 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-02 00:49:44.878274 | orchestrator | Friday 02 January 2026 00:49:28 +0000 (0:00:00.099) 0:00:11.524 ******** 2026-01-02 00:49:44.878283 | orchestrator | 2026-01-02 00:49:44.878292 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-01-02 00:49:44.878301 | orchestrator | Friday 02 January 2026 00:49:28 +0000 (0:00:00.097) 0:00:11.621 ******** 2026-01-02 00:49:44.878309 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:49:44.878319 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:49:44.878328 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:49:44.878337 | orchestrator | 2026-01-02 00:49:44.878347 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-01-02 00:49:44.878355 | orchestrator | Friday 02 January 2026 00:49:35 +0000 (0:00:07.174) 0:00:18.796 ******** 2026-01-02 00:49:44.878364 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:49:44.878372 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:49:44.878380 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:49:44.878388 | orchestrator | 2026-01-02 00:49:44.878460 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 
00:49:44.878471 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:49:44.878481 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:49:44.878491 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:49:44.878500 | orchestrator | 2026-01-02 00:49:44.878509 | orchestrator | 2026-01-02 00:49:44.878519 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:49:44.878571 | orchestrator | Friday 02 January 2026 00:49:41 +0000 (0:00:06.365) 0:00:25.162 ******** 2026-01-02 00:49:44.878581 | orchestrator | =============================================================================== 2026-01-02 00:49:44.878590 | orchestrator | redis : Restart redis container ----------------------------------------- 7.17s 2026-01-02 00:49:44.878599 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 6.37s 2026-01-02 00:49:44.878607 | orchestrator | redis : Copying over default config.json files -------------------------- 3.00s 2026-01-02 00:49:44.878616 | orchestrator | redis : Copying over redis config files --------------------------------- 2.79s 2026-01-02 00:49:44.878625 | orchestrator | redis : Check redis containers ------------------------------------------ 2.14s 2026-01-02 00:49:44.878633 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.33s 2026-01-02 00:49:44.878643 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s 2026-01-02 00:49:44.878652 | orchestrator | redis : include_tasks --------------------------------------------------- 0.51s 2026-01-02 00:49:44.878662 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s 2026-01-02 00:49:44.878671 | orchestrator | 
redis : Flush handlers -------------------------------------------------- 0.27s 2026-01-02 00:49:44.878680 | orchestrator | 2026-01-02 00:49:44 | INFO  | Task 5d4f7634-bd82-488b-ac5c-b4c027234773 is in state SUCCESS 2026-01-02 00:49:44.878689 | orchestrator | 2026-01-02 00:49:44 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:49:44.878698 | orchestrator | 2026-01-02 00:49:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:49:47.918869 | orchestrator | 2026-01-02 00:49:47 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:49:47.919070 | orchestrator | 2026-01-02 00:49:47 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:49:47.922265 | orchestrator | 2026-01-02 00:49:47 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:49:47.926732 | orchestrator | 2026-01-02 00:49:47 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED 2026-01-02 00:49:47.927967 | orchestrator | 2026-01-02 00:49:47 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:49:47.927992 | orchestrator | 2026-01-02 00:49:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:49:50.988425 | orchestrator | 2026-01-02 00:49:50 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:49:50.989745 | orchestrator | 2026-01-02 00:49:50 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:49:50.993333 | orchestrator | 2026-01-02 00:49:50 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:49:50.993369 | orchestrator | 2026-01-02 00:49:50 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED 2026-01-02 00:49:50.994946 | orchestrator | 2026-01-02 00:49:50 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:49:50.994971 | orchestrator | 2026-01-02 00:49:50 
| INFO  | Wait 1 second(s) until the next check 2026-01-02 00:49:54.071222 | orchestrator | 2026-01-02 00:49:54 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:49:54.075397 | orchestrator | 2026-01-02 00:49:54 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:49:54.075498 | orchestrator | 2026-01-02 00:49:54 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:49:54.076172 | orchestrator | 2026-01-02 00:49:54 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED 2026-01-02 00:49:54.077106 | orchestrator | 2026-01-02 00:49:54 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:49:54.077142 | orchestrator | 2026-01-02 00:49:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:49:57.164224 | orchestrator | 2026-01-02 00:49:57 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:49:57.164294 | orchestrator | 2026-01-02 00:49:57 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:49:57.166397 | orchestrator | 2026-01-02 00:49:57 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:49:57.166471 | orchestrator | 2026-01-02 00:49:57 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED 2026-01-02 00:49:57.166487 | orchestrator | 2026-01-02 00:49:57 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:49:57.166500 | orchestrator | 2026-01-02 00:49:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:00.251251 | orchestrator | 2026-01-02 00:50:00 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:50:00.251331 | orchestrator | 2026-01-02 00:50:00 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:50:00.251337 | orchestrator | 2026-01-02 00:50:00 | INFO  | Task 
e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:50:00.251342 | orchestrator | 2026-01-02 00:50:00 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED 2026-01-02 00:50:00.251347 | orchestrator | 2026-01-02 00:50:00 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:50:00.251352 | orchestrator | 2026-01-02 00:50:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:03.246987 | orchestrator | 2026-01-02 00:50:03 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:50:03.250188 | orchestrator | 2026-01-02 00:50:03 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:50:03.251997 | orchestrator | 2026-01-02 00:50:03 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:50:03.253868 | orchestrator | 2026-01-02 00:50:03 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED 2026-01-02 00:50:03.255885 | orchestrator | 2026-01-02 00:50:03 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:50:03.255934 | orchestrator | 2026-01-02 00:50:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:06.302191 | orchestrator | 2026-01-02 00:50:06 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:50:06.306881 | orchestrator | 2026-01-02 00:50:06 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:50:06.311126 | orchestrator | 2026-01-02 00:50:06 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:50:06.311852 | orchestrator | 2026-01-02 00:50:06 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED 2026-01-02 00:50:06.316927 | orchestrator | 2026-01-02 00:50:06 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:50:06.316950 | orchestrator | 2026-01-02 00:50:06 | INFO  | Wait 1 
second(s) until the next check 2026-01-02 00:50:09.363481 | orchestrator | 2026-01-02 00:50:09 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:50:09.364029 | orchestrator | 2026-01-02 00:50:09 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:50:09.366499 | orchestrator | 2026-01-02 00:50:09 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:50:09.367852 | orchestrator | 2026-01-02 00:50:09 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED 2026-01-02 00:50:09.368785 | orchestrator | 2026-01-02 00:50:09 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:50:09.368962 | orchestrator | 2026-01-02 00:50:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:12.417293 | orchestrator | 2026-01-02 00:50:12 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:50:12.417406 | orchestrator | 2026-01-02 00:50:12 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:50:12.418218 | orchestrator | 2026-01-02 00:50:12 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:50:12.418779 | orchestrator | 2026-01-02 00:50:12 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED 2026-01-02 00:50:12.419625 | orchestrator | 2026-01-02 00:50:12 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:50:12.419652 | orchestrator | 2026-01-02 00:50:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:15.463502 | orchestrator | 2026-01-02 00:50:15 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:50:15.463786 | orchestrator | 2026-01-02 00:50:15 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:50:15.464437 | orchestrator | 2026-01-02 00:50:15 | INFO  | Task 
e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:50:15.465464 | orchestrator | 2026-01-02 00:50:15 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED 2026-01-02 00:50:15.466273 | orchestrator | 2026-01-02 00:50:15 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:50:15.466316 | orchestrator | 2026-01-02 00:50:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:18.515149 | orchestrator | 2026-01-02 00:50:18 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:50:18.519207 | orchestrator | 2026-01-02 00:50:18 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:50:18.522071 | orchestrator | 2026-01-02 00:50:18 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:50:18.526528 | orchestrator | 2026-01-02 00:50:18 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED 2026-01-02 00:50:18.526607 | orchestrator | 2026-01-02 00:50:18 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:50:18.526626 | orchestrator | 2026-01-02 00:50:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:21.664985 | orchestrator | 2026-01-02 00:50:21 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:50:21.665200 | orchestrator | 2026-01-02 00:50:21 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:50:21.667776 | orchestrator | 2026-01-02 00:50:21 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:50:21.668253 | orchestrator | 2026-01-02 00:50:21 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED 2026-01-02 00:50:21.671375 | orchestrator | 2026-01-02 00:50:21 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:50:21.671449 | orchestrator | 2026-01-02 00:50:21 | INFO  | Wait 1 
second(s) until the next check 2026-01-02 00:50:24.788457 | orchestrator | 2026-01-02 00:50:24 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:50:24.788635 | orchestrator | 2026-01-02 00:50:24 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:50:24.790236 | orchestrator | 2026-01-02 00:50:24 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:50:24.790834 | orchestrator | 2026-01-02 00:50:24 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED 2026-01-02 00:50:24.791499 | orchestrator | 2026-01-02 00:50:24 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:50:24.791541 | orchestrator | 2026-01-02 00:50:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:27.841275 | orchestrator | 2026-01-02 00:50:27 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:50:27.844161 | orchestrator | 2026-01-02 00:50:27 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:50:27.846550 | orchestrator | 2026-01-02 00:50:27 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:50:27.849617 | orchestrator | 2026-01-02 00:50:27 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state STARTED 2026-01-02 00:50:27.851016 | orchestrator | 2026-01-02 00:50:27 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:50:27.851731 | orchestrator | 2026-01-02 00:50:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:30.936872 | orchestrator | 2026-01-02 00:50:30 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:50:30.937712 | orchestrator | 2026-01-02 00:50:30 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:50:30.940049 | orchestrator | 2026-01-02 00:50:30 | INFO  | Task 
e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:50:30.941282 | orchestrator | 2026-01-02 00:50:30 | INFO  | Task bf02c926-8290-4c89-9c66-995a3972bb95 is in state SUCCESS 2026-01-02 00:50:30.942888 | orchestrator | 2026-01-02 00:50:30.942986 | orchestrator | 2026-01-02 00:50:30.942995 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 00:50:30.943002 | orchestrator | 2026-01-02 00:50:30.943007 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 00:50:30.943012 | orchestrator | Friday 02 January 2026 00:49:16 +0000 (0:00:00.415) 0:00:00.415 ******** 2026-01-02 00:50:30.943017 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:50:30.943023 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:50:30.943028 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:50:30.943033 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:50:30.943038 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:50:30.943042 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:50:30.943047 | orchestrator | 2026-01-02 00:50:30.943052 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 00:50:30.943056 | orchestrator | Friday 02 January 2026 00:49:17 +0000 (0:00:00.980) 0:00:01.398 ******** 2026-01-02 00:50:30.943061 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-02 00:50:30.943066 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-02 00:50:30.943070 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-02 00:50:30.943075 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-02 00:50:30.943080 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-02 00:50:30.943084 
| orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-02 00:50:30.943089 | orchestrator | 2026-01-02 00:50:30.943093 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-01-02 00:50:30.943098 | orchestrator | 2026-01-02 00:50:30.943103 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-01-02 00:50:30.943107 | orchestrator | Friday 02 January 2026 00:49:18 +0000 (0:00:00.739) 0:00:02.138 ******** 2026-01-02 00:50:30.943113 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:50:30.943119 | orchestrator | 2026-01-02 00:50:30.943124 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-02 00:50:30.943129 | orchestrator | Friday 02 January 2026 00:49:19 +0000 (0:00:01.273) 0:00:03.411 ******** 2026-01-02 00:50:30.943133 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-02 00:50:30.943138 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-02 00:50:30.943159 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-02 00:50:30.943165 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-02 00:50:30.943170 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-02 00:50:30.943174 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-02 00:50:30.943179 | orchestrator | 2026-01-02 00:50:30.943184 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-02 00:50:30.943188 | orchestrator | Friday 02 January 2026 00:49:21 +0000 (0:00:01.498) 0:00:04.910 ******** 2026-01-02 00:50:30.943193 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-02 00:50:30.943198 | 
orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-02 00:50:30.943202 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-02 00:50:30.943207 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-02 00:50:30.943212 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-02 00:50:30.943216 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-02 00:50:30.943221 | orchestrator | 2026-01-02 00:50:30.943226 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-02 00:50:30.943230 | orchestrator | Friday 02 January 2026 00:49:22 +0000 (0:00:01.726) 0:00:06.636 ******** 2026-01-02 00:50:30.943248 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-01-02 00:50:30.943253 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:50:30.943258 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-01-02 00:50:30.943263 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:50:30.943268 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-01-02 00:50:30.943272 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:50:30.943277 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-01-02 00:50:30.943282 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:50:30.943286 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-01-02 00:50:30.943291 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:50:30.943296 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-01-02 00:50:30.943300 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:50:30.943315 | orchestrator | 2026-01-02 00:50:30.943320 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-01-02 00:50:30.943324 | orchestrator | Friday 02 January 2026 00:49:24 +0000 (0:00:01.357) 0:00:07.994 ******** 
2026-01-02 00:50:30.943329 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:50:30.943334 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:50:30.943338 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:50:30.943343 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:50:30.943348 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:50:30.943352 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:50:30.943357 | orchestrator | 2026-01-02 00:50:30.943362 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-01-02 00:50:30.943366 | orchestrator | Friday 02 January 2026 00:49:25 +0000 (0:00:00.862) 0:00:08.856 ******** 2026-01-02 00:50:30.943384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943410 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943430 | orchestrator | changed: [testbed-node-3] 
=> (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943443 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943454 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943463 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943468 | orchestrator | 2026-01-02 00:50:30.943473 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-01-02 00:50:30.943478 | orchestrator | Friday 02 January 2026 00:49:27 +0000 (0:00:01.961) 0:00:10.817 ******** 2026-01-02 00:50:30.943484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943504 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943523 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943534 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943549 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943558 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovs-appctl version'], 'timeout': '30'}}})
2026-01-02 00:50:30.943568 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-02 00:50:30.943574 | orchestrator |
2026-01-02 00:50:30.943580 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-01-02 00:50:30.943585 | orchestrator | Friday 02 January 2026 00:49:30 +0000 (0:00:03.631) 0:00:14.449 ********
2026-01-02 00:50:30.943591 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:30.943596 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:30.943601 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:30.943607 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:50:30.943612 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:50:30.943618 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:50:30.943623 | orchestrator |
2026-01-02 00:50:30.943628 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-01-02 00:50:30.943636 | orchestrator | Friday 02 January 2026 00:49:32 +0000 (0:00:01.797) 0:00:16.247 ********
2026-01-02 00:50:30.943642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2',
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 
'timeout': '30'}}}) 2026-01-02 00:50:30.943662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943671 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943677 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943700 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943711 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943720 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:50:30.943730 | orchestrator | changed: [testbed-node-5] 
=> (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-02 00:50:30.943736 | orchestrator |
2026-01-02 00:50:30.943741 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-02 00:50:30.943746 | orchestrator | Friday 02 January 2026 00:49:35 +0000 (0:00:03.101) 0:00:19.349 ********
2026-01-02 00:50:30.943751 | orchestrator |
2026-01-02 00:50:30.943757 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-02 00:50:30.943762 | orchestrator | Friday 02 January 2026 00:49:37 +0000 (0:00:01.340) 0:00:20.690 ********
2026-01-02 00:50:30.943767 | orchestrator |
2026-01-02 00:50:30.943772 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-02 00:50:30.943777 | orchestrator | Friday 02 January 2026 00:49:37 +0000 (0:00:00.836) 0:00:21.527 ********
2026-01-02 00:50:30.943783 | orchestrator |
2026-01-02 00:50:30.943788 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-02 00:50:30.943793 | orchestrator | Friday 02 January 2026 00:49:38 +0000 (0:00:00.736) 0:00:22.263 ********
2026-01-02 00:50:30.943798 | orchestrator |
2026-01-02 00:50:30.943804 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-02 00:50:30.943810 | orchestrator | Friday 02 January 2026 00:49:38 +0000 (0:00:00.282) 0:00:22.545 ********
2026-01-02 00:50:30.943815 | orchestrator |
2026-01-02 00:50:30.943820 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-02 00:50:30.943826 | orchestrator | Friday 02 January 2026 00:49:39 +0000 (0:00:00.239) 0:00:22.785 ********
2026-01-02 00:50:30.943831 | orchestrator |
2026-01-02 00:50:30.943836 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-01-02 00:50:30.943840 | orchestrator | Friday 02 January 2026 00:49:39 +0000 (0:00:00.316) 0:00:23.101 ********
2026-01-02 00:50:30.943845 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:50:30.943849 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:50:30.943854 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:50:30.943859 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:30.943863 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:50:30.943868 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:50:30.943873 | orchestrator |
2026-01-02 00:50:30.943877 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-01-02 00:50:30.943882 | orchestrator | Friday 02 January 2026 00:49:52 +0000 (0:00:12.762) 0:00:35.864 ********
2026-01-02 00:50:30.943887 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:50:30.943891 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:50:30.943912 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:50:30.943920 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:50:30.943927 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:50:30.943935 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:50:30.943943 | orchestrator |
2026-01-02 00:50:30.943950 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-01-02 00:50:30.943958 | orchestrator | Friday 02 January 2026 00:49:54 +0000 (0:00:02.582) 0:00:38.447 ********
2026-01-02 00:50:30.943963 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:50:30.943968 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:50:30.943973 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:50:30.943980 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:50:30.943987 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:50:30.943994 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:30.944007 | orchestrator |
2026-01-02 00:50:30.944016 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-01-02 00:50:30.944024 | orchestrator | Friday 02 January 2026 00:50:05 +0000 (0:00:11.196) 0:00:49.645 ********
2026-01-02 00:50:30.944032 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-01-02 00:50:30.944039 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-01-02 00:50:30.944046 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-01-02 00:50:30.944054 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-01-02 00:50:30.944058 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-01-02 00:50:30.944066 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-01-02 00:50:30.944071 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-01-02 00:50:30.944076 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-01-02 00:50:30.944080 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-01-02 00:50:30.944085 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-01-02 00:50:30.944089 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-01-02 00:50:30.944094 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-01-02 00:50:30.944098 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-02 00:50:30.944103 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-02 00:50:30.944107 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-02 00:50:30.944112 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-02 00:50:30.944116 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-02 00:50:30.944121 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-02 00:50:30.944125 | orchestrator |
2026-01-02 00:50:30.944130 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-01-02 00:50:30.944134 | orchestrator | Friday 02 January 2026 00:50:14 +0000 (0:00:08.590) 0:00:58.236 ********
2026-01-02 00:50:30.944139 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-01-02 00:50:30.944144 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:50:30.944148 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-01-02 00:50:30.944153 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:50:30.944157 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-01-02 00:50:30.944162 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:50:30.944167 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-01-02 00:50:30.944171 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-01-02 00:50:30.944176 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-01-02 00:50:30.944180 | orchestrator |
2026-01-02 00:50:30.944185 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-01-02 00:50:30.944190 | orchestrator | Friday 02 January 2026 00:50:17 +0000 (0:00:02.597) 0:01:00.833 ********
2026-01-02 00:50:30.944198 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-01-02 00:50:30.944203 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:50:30.944208 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-01-02 00:50:30.944212 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:50:30.944217 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-01-02 00:50:30.944221 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:50:30.944226 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-01-02 00:50:30.944230 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-01-02 00:50:30.944235 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-01-02 00:50:30.944239 | orchestrator |
2026-01-02 00:50:30.944244 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-01-02 00:50:30.944249 | orchestrator | Friday 02 January 2026 00:50:21 +0000 (0:00:04.345) 0:01:05.179 ********
2026-01-02 00:50:30.944253 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:30.944258 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:50:30.944262 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:50:30.944268 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:50:30.944275 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:50:30.944282 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:50:30.944289 | orchestrator |
2026-01-02 00:50:30.944296 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:50:30.944307 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-02 00:50:30.944315 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-02 00:50:30.944322 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-02 00:50:30.944329 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-02 00:50:30.944336 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-02 00:50:30.944347 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-02 00:50:30.944355 | orchestrator |
2026-01-02 00:50:30.944363 | orchestrator |
2026-01-02 00:50:30.944371 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:50:30.944378 | orchestrator | Friday 02 January 2026 00:50:29 +0000 (0:00:08.171) 0:01:13.350 ********
2026-01-02 00:50:30.944386 | orchestrator | ===============================================================================
2026-01-02 00:50:30.944393 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.37s
2026-01-02 00:50:30.944400 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.76s
2026-01-02 00:50:30.944408 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.59s
2026-01-02 00:50:30.944415 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.35s
2026-01-02 00:50:30.944423 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.75s
2026-01-02 00:50:30.944431 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.63s
2026-01-02 00:50:30.944439 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.10s
2026-01-02 00:50:30.944446 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.60s
2026-01-02 00:50:30.944454 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.58s
2026-01-02 00:50:30.944463 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.96s
2026-01-02 00:50:30.944468 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.80s
2026-01-02 00:50:30.944472 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.73s
2026-01-02 00:50:30.944477 | orchestrator | module-load : Load modules ---------------------------------------------- 1.50s
2026-01-02 00:50:30.944481 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.36s
2026-01-02 00:50:30.944486 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.27s
2026-01-02 00:50:30.944490 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.98s
2026-01-02 00:50:30.944495 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.86s
2026-01-02 00:50:30.944499 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s
2026-01-02 00:50:30.944576 | orchestrator | 2026-01-02 00:50:30 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:50:30.944582 | orchestrator | 2026-01-02 00:50:30 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:50:33.991329 | orchestrator | 2026-01-02 00:50:33 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:50:33.992817 | orchestrator | 2026-01-02 00:50:33 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:50:33.993368 | orchestrator | 2026-01-02 00:50:33 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:50:33.994218 | orchestrator | 2026-01-02 00:50:33 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED
2026-01-02 00:50:33.998286 | orchestrator | 2026-01-02 00:50:33 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:50:33.998326 | orchestrator | 2026-01-02 00:50:33 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:50:37.049988 | orchestrator | 2026-01-02 00:50:37 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:50:37.050472 | orchestrator | 2026-01-02 00:50:37 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:50:37.051051 | orchestrator | 2026-01-02 00:50:37 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:50:37.051731 | orchestrator | 2026-01-02 00:50:37 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED
2026-01-02 00:50:37.052363 | orchestrator | 2026-01-02 00:50:37 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:50:37.052474 | orchestrator | 2026-01-02 00:50:37 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:50:40.100211 | orchestrator | 2026-01-02 00:50:40 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:50:40.105212 | orchestrator | 2026-01-02 00:50:40 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:50:40.108840 | orchestrator | 2026-01-02 00:50:40 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:50:40.113281 | orchestrator | 2026-01-02 00:50:40 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED
2026-01-02 00:50:40.115864 | orchestrator | 2026-01-02 00:50:40 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:50:40.116114 | orchestrator | 2026-01-02 00:50:40 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:50:43.161867 | orchestrator | 2026-01-02 00:50:43 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:50:43.162619 | orchestrator | 2026-01-02 00:50:43 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:50:43.164151 | orchestrator | 2026-01-02 00:50:43 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:50:43.165607 | orchestrator | 2026-01-02 00:50:43 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED
2026-01-02 00:50:43.168347 | orchestrator | 2026-01-02 00:50:43 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED
2026-01-02 00:50:43.168749 | orchestrator | 2026-01-02 00:50:43 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:50:46.221504 | orchestrator | 2026-01-02 00:50:46 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:50:46.222005 | orchestrator | 2026-01-02 00:50:46 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:50:46.225471 | orchestrator | 2026-01-02 00:50:46 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:50:46.225854 | orchestrator | 2026-01-02 00:50:46 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state
STARTED 2026-01-02 00:50:46.226806 | orchestrator | 2026-01-02 00:50:46 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:50:46.228065 | orchestrator | 2026-01-02 00:50:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:49.266432 | orchestrator | 2026-01-02 00:50:49 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:50:49.268309 | orchestrator | 2026-01-02 00:50:49 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:50:49.268359 | orchestrator | 2026-01-02 00:50:49 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:50:49.268372 | orchestrator | 2026-01-02 00:50:49 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 00:50:49.270877 | orchestrator | 2026-01-02 00:50:49 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:50:49.270951 | orchestrator | 2026-01-02 00:50:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:52.313439 | orchestrator | 2026-01-02 00:50:52 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:50:52.315197 | orchestrator | 2026-01-02 00:50:52 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:50:52.316735 | orchestrator | 2026-01-02 00:50:52 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:50:52.317592 | orchestrator | 2026-01-02 00:50:52 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 00:50:52.318964 | orchestrator | 2026-01-02 00:50:52 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:50:52.319112 | orchestrator | 2026-01-02 00:50:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:55.366479 | orchestrator | 2026-01-02 00:50:55 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 
00:50:55.370781 | orchestrator | 2026-01-02 00:50:55 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:50:55.373939 | orchestrator | 2026-01-02 00:50:55 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:50:55.376160 | orchestrator | 2026-01-02 00:50:55 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 00:50:55.378577 | orchestrator | 2026-01-02 00:50:55 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:50:55.379033 | orchestrator | 2026-01-02 00:50:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:58.419659 | orchestrator | 2026-01-02 00:50:58 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:50:58.421227 | orchestrator | 2026-01-02 00:50:58 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:50:58.424310 | orchestrator | 2026-01-02 00:50:58 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:50:58.427089 | orchestrator | 2026-01-02 00:50:58 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 00:50:58.428347 | orchestrator | 2026-01-02 00:50:58 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:50:58.428511 | orchestrator | 2026-01-02 00:50:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:51:01.469560 | orchestrator | 2026-01-02 00:51:01 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:51:01.470910 | orchestrator | 2026-01-02 00:51:01 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:51:01.472823 | orchestrator | 2026-01-02 00:51:01 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:51:01.473745 | orchestrator | 2026-01-02 00:51:01 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 
00:51:01.475270 | orchestrator | 2026-01-02 00:51:01 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:51:01.475319 | orchestrator | 2026-01-02 00:51:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:51:04.519081 | orchestrator | 2026-01-02 00:51:04 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:51:04.519170 | orchestrator | 2026-01-02 00:51:04 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:51:04.519229 | orchestrator | 2026-01-02 00:51:04 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:51:04.519702 | orchestrator | 2026-01-02 00:51:04 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 00:51:04.522465 | orchestrator | 2026-01-02 00:51:04 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:51:04.522500 | orchestrator | 2026-01-02 00:51:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:51:07.575963 | orchestrator | 2026-01-02 00:51:07 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:51:07.576558 | orchestrator | 2026-01-02 00:51:07 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:51:07.580237 | orchestrator | 2026-01-02 00:51:07 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:51:07.580316 | orchestrator | 2026-01-02 00:51:07 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 00:51:07.582083 | orchestrator | 2026-01-02 00:51:07 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:51:07.582126 | orchestrator | 2026-01-02 00:51:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:51:10.626543 | orchestrator | 2026-01-02 00:51:10 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:51:10.626722 | orchestrator 
| 2026-01-02 00:51:10 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:51:10.628116 | orchestrator | 2026-01-02 00:51:10 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:51:10.628803 | orchestrator | 2026-01-02 00:51:10 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 00:51:10.629944 | orchestrator | 2026-01-02 00:51:10 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:51:10.629981 | orchestrator | 2026-01-02 00:51:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:51:13.668682 | orchestrator | 2026-01-02 00:51:13 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:51:13.672370 | orchestrator | 2026-01-02 00:51:13 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:51:13.674962 | orchestrator | 2026-01-02 00:51:13 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:51:13.678262 | orchestrator | 2026-01-02 00:51:13 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 00:51:13.679570 | orchestrator | 2026-01-02 00:51:13 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:51:13.679941 | orchestrator | 2026-01-02 00:51:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:51:16.804238 | orchestrator | 2026-01-02 00:51:16 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:51:16.805431 | orchestrator | 2026-01-02 00:51:16 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:51:16.807320 | orchestrator | 2026-01-02 00:51:16 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:51:16.809260 | orchestrator | 2026-01-02 00:51:16 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 00:51:16.813841 | orchestrator | 
2026-01-02 00:51:16 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:51:16.813926 | orchestrator | 2026-01-02 00:51:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:51:19.851331 | orchestrator | 2026-01-02 00:51:19 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:51:19.851925 | orchestrator | 2026-01-02 00:51:19 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:51:19.855625 | orchestrator | 2026-01-02 00:51:19 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:51:19.857153 | orchestrator | 2026-01-02 00:51:19 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 00:51:19.858937 | orchestrator | 2026-01-02 00:51:19 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:51:19.859005 | orchestrator | 2026-01-02 00:51:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:51:22.970503 | orchestrator | 2026-01-02 00:51:22 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:51:22.971102 | orchestrator | 2026-01-02 00:51:22 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:51:22.971809 | orchestrator | 2026-01-02 00:51:22 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:51:22.972831 | orchestrator | 2026-01-02 00:51:22 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 00:51:22.973471 | orchestrator | 2026-01-02 00:51:22 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:51:22.973575 | orchestrator | 2026-01-02 00:51:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:51:26.157571 | orchestrator | 2026-01-02 00:51:26 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:51:26.157695 | orchestrator | 2026-01-02 00:51:26 | INFO  | 
Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:51:26.157734 | orchestrator | 2026-01-02 00:51:26 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:51:26.157744 | orchestrator | 2026-01-02 00:51:26 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 00:51:26.157753 | orchestrator | 2026-01-02 00:51:26 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:51:26.157761 | orchestrator | 2026-01-02 00:51:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:51:29.218611 | orchestrator | 2026-01-02 00:51:29 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:51:29.219943 | orchestrator | 2026-01-02 00:51:29 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:51:29.222180 | orchestrator | 2026-01-02 00:51:29 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:51:29.223280 | orchestrator | 2026-01-02 00:51:29 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 00:51:29.225241 | orchestrator | 2026-01-02 00:51:29 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:51:29.225334 | orchestrator | 2026-01-02 00:51:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:51:32.315973 | orchestrator | 2026-01-02 00:51:32 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:51:32.318533 | orchestrator | 2026-01-02 00:51:32 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:51:32.318636 | orchestrator | 2026-01-02 00:51:32 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:51:32.319229 | orchestrator | 2026-01-02 00:51:32 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 00:51:32.319727 | orchestrator | 2026-01-02 00:51:32 | INFO  | Task 
3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state STARTED 2026-01-02 00:51:32.320063 | orchestrator | 2026-01-02 00:51:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:51:35.372021 | orchestrator | 2026-01-02 00:51:35 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:51:35.372171 | orchestrator | 2026-01-02 00:51:35 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:51:35.372187 | orchestrator | 2026-01-02 00:51:35 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED 2026-01-02 00:51:35.372198 | orchestrator | 2026-01-02 00:51:35 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 00:51:35.372209 | orchestrator | 2026-01-02 00:51:35 | INFO  | Task 3875b75d-dcad-489a-a5ba-b61f5eb2d215 is in state SUCCESS 2026-01-02 00:51:35.372220 | orchestrator | 2026-01-02 00:51:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:51:35.373115 | orchestrator | 2026-01-02 00:51:35.373262 | orchestrator | 2026-01-02 00:51:35.373286 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-01-02 00:51:35.373302 | orchestrator | 2026-01-02 00:51:35.373319 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-01-02 00:51:35.373337 | orchestrator | Friday 02 January 2026 00:46:49 +0000 (0:00:00.168) 0:00:00.168 ******** 2026-01-02 00:51:35.373355 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:51:35.373375 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:51:35.373392 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:51:35.373409 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:51:35.373425 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:51:35.373442 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:51:35.373456 | orchestrator | 2026-01-02 00:51:35.373471 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] 
************************** 2026-01-02 00:51:35.373522 | orchestrator | Friday 02 January 2026 00:46:49 +0000 (0:00:00.646) 0:00:00.815 ******** 2026-01-02 00:51:35.373541 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:51:35.373558 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:51:35.373574 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:51:35.373592 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:51:35.373608 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:51:35.373626 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:51:35.373642 | orchestrator | 2026-01-02 00:51:35.373659 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-01-02 00:51:35.373677 | orchestrator | Friday 02 January 2026 00:46:50 +0000 (0:00:00.622) 0:00:01.438 ******** 2026-01-02 00:51:35.373693 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:51:35.373711 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:51:35.373730 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:51:35.373748 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:51:35.373765 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:51:35.373778 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:51:35.373790 | orchestrator | 2026-01-02 00:51:35.373803 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-01-02 00:51:35.373815 | orchestrator | Friday 02 January 2026 00:46:50 +0000 (0:00:00.579) 0:00:02.018 ******** 2026-01-02 00:51:35.373826 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:51:35.373837 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:51:35.373849 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:51:35.373886 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:51:35.373900 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:51:35.373912 | orchestrator | changed: [testbed-node-5] 2026-01-02 
00:51:35.373923 | orchestrator | 2026-01-02 00:51:35.373935 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-01-02 00:51:35.373947 | orchestrator | Friday 02 January 2026 00:46:53 +0000 (0:00:02.505) 0:00:04.523 ******** 2026-01-02 00:51:35.373959 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:51:35.373970 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:51:35.373981 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:51:35.373992 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:51:35.374004 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:51:35.374064 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:51:35.374079 | orchestrator | 2026-01-02 00:51:35.374091 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-01-02 00:51:35.374102 | orchestrator | Friday 02 January 2026 00:46:54 +0000 (0:00:01.020) 0:00:05.543 ******** 2026-01-02 00:51:35.374112 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:51:35.374122 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:51:35.374131 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:51:35.374141 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:51:35.374151 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:51:35.374161 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:51:35.374170 | orchestrator | 2026-01-02 00:51:35.374180 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-01-02 00:51:35.374190 | orchestrator | Friday 02 January 2026 00:46:55 +0000 (0:00:00.911) 0:00:06.454 ******** 2026-01-02 00:51:35.374200 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:51:35.374210 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:51:35.374219 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:51:35.374229 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:51:35.374239 | 
orchestrator | skipping: [testbed-node-1] 2026-01-02 00:51:35.374248 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:51:35.374258 | orchestrator | 2026-01-02 00:51:35.374267 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-01-02 00:51:35.374277 | orchestrator | Friday 02 January 2026 00:46:55 +0000 (0:00:00.651) 0:00:07.106 ******** 2026-01-02 00:51:35.374287 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:51:35.374307 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:51:35.374317 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:51:35.374327 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:51:35.374336 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:51:35.374346 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:51:35.374356 | orchestrator | 2026-01-02 00:51:35.374365 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-01-02 00:51:35.374375 | orchestrator | Friday 02 January 2026 00:46:56 +0000 (0:00:00.667) 0:00:07.773 ******** 2026-01-02 00:51:35.374385 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-02 00:51:35.374395 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-02 00:51:35.374404 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:51:35.374414 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-02 00:51:35.374424 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-02 00:51:35.374434 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:51:35.374443 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-02 00:51:35.374453 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-02 00:51:35.375126 | orchestrator 
| skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-02 00:51:35.375150 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-02 00:51:35.375176 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:51:35.375187 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-02 00:51:35.375196 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-02 00:51:35.375206 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:51:35.375216 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:51:35.375226 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-02 00:51:35.375236 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-02 00:51:35.375245 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:51:35.375255 | orchestrator | 2026-01-02 00:51:35.375265 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-01-02 00:51:35.375275 | orchestrator | Friday 02 January 2026 00:46:57 +0000 (0:00:00.609) 0:00:08.383 ******** 2026-01-02 00:51:35.375289 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:51:35.375300 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:51:35.375309 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:51:35.375319 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:51:35.375329 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:51:35.375339 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:51:35.375348 | orchestrator | 2026-01-02 00:51:35.375358 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-01-02 00:51:35.375369 | orchestrator | Friday 02 January 2026 00:46:58 +0000 (0:00:01.234) 0:00:09.617 ******** 2026-01-02 
00:51:35.375379 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:51:35.375389 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:51:35.375399 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:51:35.375409 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:51:35.375419 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:51:35.375428 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:51:35.375438 | orchestrator | 2026-01-02 00:51:35.375448 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-01-02 00:51:35.375458 | orchestrator | Friday 02 January 2026 00:46:59 +0000 (0:00:01.119) 0:00:10.737 ******** 2026-01-02 00:51:35.375468 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:51:35.375478 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:51:35.375488 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:51:35.375508 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:51:35.375519 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:51:35.375529 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:51:35.375538 | orchestrator | 2026-01-02 00:51:35.375548 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-01-02 00:51:35.375558 | orchestrator | Friday 02 January 2026 00:47:05 +0000 (0:00:05.499) 0:00:16.236 ******** 2026-01-02 00:51:35.375568 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:51:35.375578 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:51:35.375587 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:51:35.375597 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:51:35.375607 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:51:35.375617 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:51:35.375627 | orchestrator | 2026-01-02 00:51:35.375636 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-01-02 00:51:35.375646 | 
orchestrator | Friday 02 January 2026 00:47:06 +0000 (0:00:01.078) 0:00:17.315 ******** 2026-01-02 00:51:35.375656 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:51:35.375666 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:51:35.375675 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:51:35.375686 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:51:35.375695 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:51:35.375705 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:51:35.375715 | orchestrator | 2026-01-02 00:51:35.375725 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-01-02 00:51:35.375736 | orchestrator | Friday 02 January 2026 00:47:08 +0000 (0:00:02.217) 0:00:19.532 ******** 2026-01-02 00:51:35.375746 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:51:35.375756 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:51:35.375765 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:51:35.375775 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:51:35.375785 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:51:35.375794 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:51:35.375804 | orchestrator | 2026-01-02 00:51:35.375814 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-01-02 00:51:35.375824 | orchestrator | Friday 02 January 2026 00:47:10 +0000 (0:00:01.813) 0:00:21.346 ******** 2026-01-02 00:51:35.375834 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-01-02 00:51:35.375844 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-01-02 00:51:35.375853 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:51:35.375957 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-01-02 00:51:35.375977 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s) 
 2026-01-02 00:51:35.375994 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:51:35.376009 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-01-02 00:51:35.376024 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-01-02 00:51:35.376039 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:51:35.376052 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-01-02 00:51:35.376067 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-01-02 00:51:35.376083 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:51:35.376099 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-01-02 00:51:35.376114 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-01-02 00:51:35.376130 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:51:35.376146 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-01-02 00:51:35.376162 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-01-02 00:51:35.376180 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:51:35.376197 | orchestrator | 2026-01-02 00:51:35.376211 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-01-02 00:51:35.376242 | orchestrator | Friday 02 January 2026 00:47:12 +0000 (0:00:01.871) 0:00:23.218 ******** 2026-01-02 00:51:35.376274 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:51:35.376292 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:51:35.376309 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:51:35.376324 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:51:35.376342 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:51:35.376358 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:51:35.376373 | orchestrator | 2026-01-02 00:51:35.376387 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries 
configured] *** 2026-01-02 00:51:35.376402 | orchestrator | Friday 02 January 2026 00:47:13 +0000 (0:00:01.467) 0:00:24.685 ******** 2026-01-02 00:51:35.376417 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:51:35.376431 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:51:35.376444 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:51:35.376457 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:51:35.376471 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:51:35.376488 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:51:35.376497 | orchestrator | 2026-01-02 00:51:35.376505 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-01-02 00:51:35.376513 | orchestrator | 2026-01-02 00:51:35.376521 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-01-02 00:51:35.376529 | orchestrator | Friday 02 January 2026 00:47:15 +0000 (0:00:01.942) 0:00:26.628 ******** 2026-01-02 00:51:35.376537 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:51:35.376545 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:51:35.376554 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:51:35.376562 | orchestrator | 2026-01-02 00:51:35.376570 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-01-02 00:51:35.376578 | orchestrator | Friday 02 January 2026 00:47:17 +0000 (0:00:02.121) 0:00:28.749 ******** 2026-01-02 00:51:35.376586 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:51:35.376594 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:51:35.376602 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:51:35.376610 | orchestrator | 2026-01-02 00:51:35.376618 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-01-02 00:51:35.376626 | orchestrator | Friday 02 January 2026 00:47:18 +0000 (0:00:01.370) 0:00:30.120 ******** 2026-01-02 
00:51:35.376634 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:51:35.376642 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:51:35.376650 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:51:35.376658 | orchestrator | 2026-01-02 00:51:35.376667 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-01-02 00:51:35.376675 | orchestrator | Friday 02 January 2026 00:47:19 +0000 (0:00:01.020) 0:00:31.140 ******** 2026-01-02 00:51:35.376683 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:51:35.376691 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:51:35.376699 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:51:35.376707 | orchestrator | 2026-01-02 00:51:35.376715 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-01-02 00:51:35.376723 | orchestrator | Friday 02 January 2026 00:47:20 +0000 (0:00:00.798) 0:00:31.939 ******** 2026-01-02 00:51:35.376731 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:51:35.376739 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:51:35.376747 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:51:35.376755 | orchestrator | 2026-01-02 00:51:35.376763 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-01-02 00:51:35.376771 | orchestrator | Friday 02 January 2026 00:47:21 +0000 (0:00:00.475) 0:00:32.415 ******** 2026-01-02 00:51:35.376780 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:51:35.376788 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:51:35.376796 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:51:35.376803 | orchestrator | 2026-01-02 00:51:35.376811 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-01-02 00:51:35.376819 | orchestrator | Friday 02 January 2026 00:47:22 +0000 (0:00:01.635) 0:00:34.050 ******** 2026-01-02 00:51:35.376833 | orchestrator | changed: 
[testbed-node-1] 2026-01-02 00:51:35.376842 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:51:35.376850 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:51:35.376858 | orchestrator | 2026-01-02 00:51:35.376894 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-01-02 00:51:35.376902 | orchestrator | Friday 02 January 2026 00:47:24 +0000 (0:00:01.806) 0:00:35.857 ******** 2026-01-02 00:51:35.376910 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:51:35.376918 | orchestrator | 2026-01-02 00:51:35.376926 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-01-02 00:51:35.376934 | orchestrator | Friday 02 January 2026 00:47:25 +0000 (0:00:00.418) 0:00:36.276 ******** 2026-01-02 00:51:35.376942 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:51:35.376950 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:51:35.376958 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:51:35.376966 | orchestrator | 2026-01-02 00:51:35.376974 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-01-02 00:51:35.376982 | orchestrator | Friday 02 January 2026 00:47:27 +0000 (0:00:02.746) 0:00:39.022 ******** 2026-01-02 00:51:35.376990 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:51:35.376997 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:51:35.377005 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:51:35.377013 | orchestrator | 2026-01-02 00:51:35.377021 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-01-02 00:51:35.377030 | orchestrator | Friday 02 January 2026 00:47:28 +0000 (0:00:00.761) 0:00:39.784 ******** 2026-01-02 00:51:35.377037 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:51:35.377045 | orchestrator | skipping: [testbed-node-2] 
2026-01-02 00:51:35.377053 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:51:35.377061 | orchestrator |
2026-01-02 00:51:35.377069 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-01-02 00:51:35.377077 | orchestrator | Friday 02 January 2026 00:47:29 +0000 (0:00:01.007) 0:00:40.791 ********
2026-01-02 00:51:35.377085 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:51:35.377093 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:51:35.377101 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:51:35.377109 | orchestrator |
2026-01-02 00:51:35.377117 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-01-02 00:51:35.377132 | orchestrator | Friday 02 January 2026 00:47:31 +0000 (0:00:01.802) 0:00:42.594 ********
2026-01-02 00:51:35.377140 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:51:35.377148 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:51:35.377156 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:51:35.377164 | orchestrator |
2026-01-02 00:51:35.377172 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-01-02 00:51:35.377180 | orchestrator | Friday 02 January 2026 00:47:32 +0000 (0:00:00.760) 0:00:43.354 ********
2026-01-02 00:51:35.377188 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:51:35.377196 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:51:35.377204 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:51:35.377211 | orchestrator |
2026-01-02 00:51:35.377219 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-01-02 00:51:35.377227 | orchestrator | Friday 02 January 2026 00:47:32 +0000 (0:00:00.324) 0:00:43.679 ********
2026-01-02 00:51:35.377235 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:51:35.377248 | orchestrator | changed: [testbed-node-1]
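The "Init cluster inside the transient k3s-init service" step above bootstraps the servers once under a throwaway systemd unit, so a failed first start can be cleaned up without touching the permanent k3s service (the later "Kill the temporary service used for initialization" and "Copy K3s service file" tasks complete the handover). A minimal sketch of that pattern, assuming upstream-style `systemd-run` flags and hypothetical `k3s_token` / `server_init_args` variables:

```yaml
# Sketch only, not the exact task from this role: bootstrap the cluster under
# a disposable transient unit named k3s-init. Variable names are illustrative.
- name: Init cluster inside the transient k3s-init service
  ansible.builtin.command:
    cmd: >-
      systemd-run -p RestartSec=2 -p Restart=on-failure
      --unit=k3s-init
      k3s server --cluster-init --token {{ k3s_token }} {{ server_init_args }}
    creates: /run/systemd/transient/k3s-init.service
```

Because the unit is transient, tearing it down is just a matter of stopping (or killing) `k3s-init` once the permanent service file is in place.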
2026-01-02 00:51:35.377256 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:51:35.377264 | orchestrator |
2026-01-02 00:51:35.377272 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-01-02 00:51:35.377280 | orchestrator | Friday 02 January 2026 00:47:34 +0000 (0:00:01.840) 0:00:45.519 ********
2026-01-02 00:51:35.377288 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:51:35.377301 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:51:35.377310 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:51:35.377318 | orchestrator |
2026-01-02 00:51:35.377326 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-01-02 00:51:35.377334 | orchestrator | Friday 02 January 2026 00:47:36 +0000 (0:00:02.278) 0:00:47.798 ********
2026-01-02 00:51:35.377342 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:51:35.377350 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:51:35.377358 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:51:35.377366 | orchestrator |
2026-01-02 00:51:35.377374 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-01-02 00:51:35.377382 | orchestrator | Friday 02 January 2026 00:47:37 +0000 (0:00:00.855) 0:00:48.654 ********
2026-01-02 00:51:35.377391 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-02 00:51:35.377399 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-02 00:51:35.377408 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-02 00:51:35.377416 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-02 00:51:35.377424 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-02 00:51:35.377432 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-02 00:51:35.377439 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-02 00:51:35.377448 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-02 00:51:35.377455 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-02 00:51:35.377463 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-02 00:51:35.377471 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-02 00:51:35.377479 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-02 00:51:35.377487 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
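The repeated "FAILED - RETRYING" messages are normal here: the task polls until every server has registered with the API, using Ansible's `until`/`retries` loop. A sketch of that pattern, assuming `k3s kubectl` is on the path and a `master` inventory group; the 20-attempt budget mirrors the log, the rest is illustrative:

```yaml
# Sketch, not the exact task: keep listing nodes until all expected masters
# have joined, then move on. Fails the play only after 20 attempts.
- name: Verify that all nodes actually joined
  ansible.builtin.command:
    cmd: k3s kubectl get nodes -o name
  register: nodes_joined
  until: nodes_joined.stdout_lines | length == (groups['master'] | length)
  retries: 20
  delay: 10
  changed_when: false
```

In this run the condition became true after roughly 53 seconds (five retry rounds), well inside the budget.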
2026-01-02 00:51:35.377495 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:51:35.377503 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:51:35.377511 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:51:35.377519 | orchestrator | 2026-01-02 00:51:35.377527 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-01-02 00:51:35.377535 | orchestrator | Friday 02 January 2026 00:48:31 +0000 (0:00:53.606) 0:01:42.261 ******** 2026-01-02 00:51:35.377543 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:51:35.377551 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:51:35.377559 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:51:35.377567 | orchestrator | 2026-01-02 00:51:35.377575 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-01-02 00:51:35.377583 | orchestrator | Friday 02 January 2026 00:48:31 +0000 (0:00:00.311) 0:01:42.573 ******** 2026-01-02 00:51:35.377591 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:51:35.377599 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:51:35.377613 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:51:35.377622 | orchestrator | 2026-01-02 00:51:35.377629 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-01-02 00:51:35.377643 | orchestrator | Friday 02 January 2026 00:48:32 +0000 (0:00:01.102) 0:01:43.675 ******** 2026-01-02 00:51:35.377651 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:51:35.377659 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:51:35.377667 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:51:35.377675 | orchestrator | 2026-01-02 00:51:35.377683 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-01-02 00:51:35.377691 | orchestrator | Friday 02 January 2026 00:48:34 +0000 (0:00:01.545) 0:01:45.221 ******** 2026-01-02 00:51:35.377699 
| orchestrator | changed: [testbed-node-1] 2026-01-02 00:51:35.377707 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:51:35.377715 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:51:35.377723 | orchestrator | 2026-01-02 00:51:35.377731 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-01-02 00:51:35.377740 | orchestrator | Friday 02 January 2026 00:49:01 +0000 (0:00:27.638) 0:02:12.860 ******** 2026-01-02 00:51:35.377748 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:51:35.377756 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:51:35.377764 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:51:35.377772 | orchestrator | 2026-01-02 00:51:35.377784 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-01-02 00:51:35.377792 | orchestrator | Friday 02 January 2026 00:49:02 +0000 (0:00:00.816) 0:02:13.677 ******** 2026-01-02 00:51:35.377800 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:51:35.377808 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:51:35.377816 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:51:35.377824 | orchestrator | 2026-01-02 00:51:35.377832 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-01-02 00:51:35.377840 | orchestrator | Friday 02 January 2026 00:49:03 +0000 (0:00:00.773) 0:02:14.450 ******** 2026-01-02 00:51:35.377848 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:51:35.377856 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:51:35.377887 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:51:35.377898 | orchestrator | 2026-01-02 00:51:35.377906 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-01-02 00:51:35.377914 | orchestrator | Friday 02 January 2026 00:49:04 +0000 (0:00:01.069) 0:02:15.519 ******** 2026-01-02 00:51:35.377922 | orchestrator | ok: [testbed-node-0] 
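The node-token task sequence above (wait, register mode, change access, read, store, restore) exists because the token file is root-only by default but must be read so agents can join. A condensed sketch under that assumption; the paths are k3s defaults, the mode values and fact names are illustrative:

```yaml
# Sketch of the node-token handling: temporarily widen permissions so the
# token can be slurped, keep it as a fact, then restore the original mode.
- name: Wait for node-token
  ansible.builtin.wait_for:
    path: /var/lib/rancher/k3s/server/node-token

- name: Register node-token file access mode
  ansible.builtin.stat:
    path: /var/lib/rancher/k3s/server/node-token
  register: token_stat

- name: Change file access node-token
  ansible.builtin.file:
    path: /var/lib/rancher/k3s/server/node-token
    mode: "g+rX,o+rX"

- name: Read node-token from master
  ansible.builtin.slurp:
    src: /var/lib/rancher/k3s/server/node-token
  register: node_token

- name: Store Master node-token
  ansible.builtin.set_fact:
    token: "{{ node_token.content | b64decode | trim }}"

- name: Restore node-token file access
  ansible.builtin.file:
    path: /var/lib/rancher/k3s/server/node-token
    mode: "{{ token_stat.stat.mode }}"
```

The stored fact is what the worker-node play ("Deploy k3s worker nodes") consumes when configuring the agent service.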
2026-01-02 00:51:35.377930 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:51:35.377938 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:51:35.377946 | orchestrator | 2026-01-02 00:51:35.377954 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-01-02 00:51:35.377962 | orchestrator | Friday 02 January 2026 00:49:05 +0000 (0:00:00.965) 0:02:16.485 ******** 2026-01-02 00:51:35.377970 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:51:35.377978 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:51:35.377986 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:51:35.377994 | orchestrator | 2026-01-02 00:51:35.378002 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-01-02 00:51:35.378010 | orchestrator | Friday 02 January 2026 00:49:05 +0000 (0:00:00.560) 0:02:17.045 ******** 2026-01-02 00:51:35.378075 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:51:35.378084 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:51:35.378092 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:51:35.378100 | orchestrator | 2026-01-02 00:51:35.378108 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-01-02 00:51:35.378116 | orchestrator | Friday 02 January 2026 00:49:06 +0000 (0:00:00.852) 0:02:17.898 ******** 2026-01-02 00:51:35.378124 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:51:35.378132 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:51:35.378140 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:51:35.378147 | orchestrator | 2026-01-02 00:51:35.378155 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-01-02 00:51:35.378171 | orchestrator | Friday 02 January 2026 00:49:07 +0000 (0:00:01.008) 0:02:18.906 ******** 2026-01-02 00:51:35.378179 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:51:35.378187 | 
orchestrator | changed: [testbed-node-1] 2026-01-02 00:51:35.378195 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:51:35.378203 | orchestrator | 2026-01-02 00:51:35.378211 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-01-02 00:51:35.378219 | orchestrator | Friday 02 January 2026 00:49:08 +0000 (0:00:01.212) 0:02:20.119 ******** 2026-01-02 00:51:35.378226 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:51:35.378234 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:51:35.378242 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:51:35.378250 | orchestrator | 2026-01-02 00:51:35.378258 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-01-02 00:51:35.378266 | orchestrator | Friday 02 January 2026 00:49:09 +0000 (0:00:00.898) 0:02:21.017 ******** 2026-01-02 00:51:35.378274 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:51:35.378282 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:51:35.378290 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:51:35.378298 | orchestrator | 2026-01-02 00:51:35.378306 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-01-02 00:51:35.378313 | orchestrator | Friday 02 January 2026 00:49:10 +0000 (0:00:00.308) 0:02:21.326 ******** 2026-01-02 00:51:35.378321 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:51:35.378329 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:51:35.378337 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:51:35.378345 | orchestrator | 2026-01-02 00:51:35.378353 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-01-02 00:51:35.378361 | orchestrator | Friday 02 January 2026 00:49:10 +0000 (0:00:00.324) 0:02:21.651 ******** 2026-01-02 00:51:35.378369 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:51:35.378377 | orchestrator | 
ok: [testbed-node-1] 2026-01-02 00:51:35.378385 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:51:35.378393 | orchestrator | 2026-01-02 00:51:35.378401 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-01-02 00:51:35.378409 | orchestrator | Friday 02 January 2026 00:49:11 +0000 (0:00:00.931) 0:02:22.582 ******** 2026-01-02 00:51:35.378416 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:51:35.378425 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:51:35.378433 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:51:35.378440 | orchestrator | 2026-01-02 00:51:35.378449 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-01-02 00:51:35.378457 | orchestrator | Friday 02 January 2026 00:49:12 +0000 (0:00:00.782) 0:02:23.365 ******** 2026-01-02 00:51:35.378471 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-02 00:51:35.378480 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-02 00:51:35.378488 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-02 00:51:35.378496 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-02 00:51:35.378504 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-02 00:51:35.378512 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-02 00:51:35.378524 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-02 00:51:35.378533 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-02 
00:51:35.378541 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-02 00:51:35.378555 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-01-02 00:51:35.378563 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-02 00:51:35.378572 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-02 00:51:35.378579 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-01-02 00:51:35.378587 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-02 00:51:35.378595 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-02 00:51:35.378603 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-02 00:51:35.378611 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-02 00:51:35.378619 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-02 00:51:35.378627 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-02 00:51:35.378635 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-02 00:51:35.378643 | orchestrator | 2026-01-02 00:51:35.378651 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-01-02 00:51:35.378659 | orchestrator | 2026-01-02 00:51:35.378667 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-01-02 00:51:35.378675 | orchestrator | Friday 02 January 2026 00:49:15 +0000 (0:00:03.226) 
0:02:26.591 ******** 2026-01-02 00:51:35.378683 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:51:35.378691 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:51:35.378699 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:51:35.378707 | orchestrator | 2026-01-02 00:51:35.378715 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-01-02 00:51:35.378723 | orchestrator | Friday 02 January 2026 00:49:15 +0000 (0:00:00.529) 0:02:27.121 ******** 2026-01-02 00:51:35.378731 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:51:35.378739 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:51:35.378747 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:51:35.378755 | orchestrator | 2026-01-02 00:51:35.378763 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-01-02 00:51:35.378771 | orchestrator | Friday 02 January 2026 00:49:16 +0000 (0:00:00.619) 0:02:27.741 ******** 2026-01-02 00:51:35.378779 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:51:35.378787 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:51:35.378795 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:51:35.378803 | orchestrator | 2026-01-02 00:51:35.378811 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-01-02 00:51:35.378819 | orchestrator | Friday 02 January 2026 00:49:16 +0000 (0:00:00.332) 0:02:28.073 ******** 2026-01-02 00:51:35.378827 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:51:35.378835 | orchestrator | 2026-01-02 00:51:35.378843 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-01-02 00:51:35.378851 | orchestrator | Friday 02 January 2026 00:49:17 +0000 (0:00:00.588) 0:02:28.661 ******** 2026-01-02 00:51:35.378889 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:51:35.378899 
| orchestrator | skipping: [testbed-node-4] 2026-01-02 00:51:35.378907 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:51:35.378915 | orchestrator | 2026-01-02 00:51:35.378923 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-01-02 00:51:35.378931 | orchestrator | Friday 02 January 2026 00:49:17 +0000 (0:00:00.235) 0:02:28.897 ******** 2026-01-02 00:51:35.378939 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:51:35.378947 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:51:35.378955 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:51:35.378969 | orchestrator | 2026-01-02 00:51:35.378977 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-01-02 00:51:35.378985 | orchestrator | Friday 02 January 2026 00:49:18 +0000 (0:00:00.304) 0:02:29.202 ******** 2026-01-02 00:51:35.378993 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:51:35.379000 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:51:35.379008 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:51:35.379016 | orchestrator | 2026-01-02 00:51:35.379024 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-01-02 00:51:35.379037 | orchestrator | Friday 02 January 2026 00:49:18 +0000 (0:00:00.282) 0:02:29.485 ******** 2026-01-02 00:51:35.379046 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:51:35.379059 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:51:35.379077 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:51:35.379097 | orchestrator | 2026-01-02 00:51:35.379110 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-01-02 00:51:35.379122 | orchestrator | Friday 02 January 2026 00:49:19 +0000 (0:00:00.736) 0:02:30.221 ******** 2026-01-02 00:51:35.379135 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:51:35.379147 | 
orchestrator | changed: [testbed-node-4] 2026-01-02 00:51:35.379159 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:51:35.379172 | orchestrator | 2026-01-02 00:51:35.379185 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-01-02 00:51:35.379198 | orchestrator | Friday 02 January 2026 00:49:20 +0000 (0:00:01.179) 0:02:31.400 ******** 2026-01-02 00:51:35.379210 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:51:35.379224 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:51:35.379245 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:51:35.379258 | orchestrator | 2026-01-02 00:51:35.379272 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-01-02 00:51:35.379285 | orchestrator | Friday 02 January 2026 00:49:21 +0000 (0:00:01.203) 0:02:32.604 ******** 2026-01-02 00:51:35.379293 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:51:35.379301 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:51:35.379309 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:51:35.379317 | orchestrator | 2026-01-02 00:51:35.379325 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-02 00:51:35.379333 | orchestrator | 2026-01-02 00:51:35.379341 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-02 00:51:35.379349 | orchestrator | Friday 02 January 2026 00:49:32 +0000 (0:00:10.916) 0:02:43.521 ******** 2026-01-02 00:51:35.379356 | orchestrator | ok: [testbed-manager] 2026-01-02 00:51:35.379364 | orchestrator | 2026-01-02 00:51:35.379372 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-02 00:51:35.379380 | orchestrator | Friday 02 January 2026 00:49:33 +0000 (0:00:00.917) 0:02:44.438 ******** 2026-01-02 00:51:35.379388 | orchestrator | changed: [testbed-manager] 2026-01-02 
00:51:35.379396 | orchestrator | 2026-01-02 00:51:35.379404 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-02 00:51:35.379411 | orchestrator | Friday 02 January 2026 00:49:33 +0000 (0:00:00.605) 0:02:45.043 ******** 2026-01-02 00:51:35.379419 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-02 00:51:35.379427 | orchestrator | 2026-01-02 00:51:35.379435 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-02 00:51:35.379443 | orchestrator | Friday 02 January 2026 00:49:34 +0000 (0:00:00.743) 0:02:45.787 ******** 2026-01-02 00:51:35.379451 | orchestrator | changed: [testbed-manager] 2026-01-02 00:51:35.379459 | orchestrator | 2026-01-02 00:51:35.379467 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-02 00:51:35.379474 | orchestrator | Friday 02 January 2026 00:49:35 +0000 (0:00:01.272) 0:02:47.059 ******** 2026-01-02 00:51:35.379482 | orchestrator | changed: [testbed-manager] 2026-01-02 00:51:35.379490 | orchestrator | 2026-01-02 00:51:35.379498 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-02 00:51:35.379517 | orchestrator | Friday 02 January 2026 00:49:36 +0000 (0:00:00.715) 0:02:47.775 ******** 2026-01-02 00:51:35.379525 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-02 00:51:35.379533 | orchestrator | 2026-01-02 00:51:35.379541 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-01-02 00:51:35.379549 | orchestrator | Friday 02 January 2026 00:49:38 +0000 (0:00:01.848) 0:02:49.624 ******** 2026-01-02 00:51:35.379557 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-02 00:51:35.379565 | orchestrator | 2026-01-02 00:51:35.379573 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
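The two "Change server address" tasks above rewrite the copied kubeconfig so kubectl talks to the kube-vip address rather than the default localhost endpoint baked in by k3s. A minimal sketch; the VIP `https://192.168.16.8:6443` comes from the log, the module choice and path are illustrative:

```yaml
# Sketch: point the fetched kubeconfig at the cluster VIP instead of the
# 127.0.0.1 default written by k3s on the first master.
- name: Change server address in the kubeconfig
  ansible.builtin.replace:
    path: "{{ ansible_env.HOME }}/.kube/config"
    regexp: 'https://127\.0\.0\.1:6443'
    replace: "https://192.168.16.8:6443"
```

The same rewrite is applied a second time to the copy made available inside the manager service, so both entry points reach the API through the VIP.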
2026-01-02 00:51:35.379581 | orchestrator | Friday 02 January 2026 00:49:39 +0000 (0:00:01.240) 0:02:50.864 ********
2026-01-02 00:51:35.379590 | orchestrator | changed: [testbed-manager]
2026-01-02 00:51:35.379597 | orchestrator |
2026-01-02 00:51:35.379605 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-01-02 00:51:35.379613 | orchestrator | Friday 02 January 2026 00:49:40 +0000 (0:00:00.417) 0:02:51.282 ********
2026-01-02 00:51:35.379621 | orchestrator | changed: [testbed-manager]
2026-01-02 00:51:35.379629 | orchestrator |
2026-01-02 00:51:35.379637 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-01-02 00:51:35.379645 | orchestrator |
2026-01-02 00:51:35.379653 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-01-02 00:51:35.379661 | orchestrator | Friday 02 January 2026 00:49:41 +0000 (0:00:00.909) 0:02:52.191 ********
2026-01-02 00:51:35.379669 | orchestrator | ok: [testbed-manager]
2026-01-02 00:51:35.379676 | orchestrator |
2026-01-02 00:51:35.379684 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-01-02 00:51:35.379692 | orchestrator | Friday 02 January 2026 00:49:41 +0000 (0:00:00.242) 0:02:52.434 ********
2026-01-02 00:51:35.379700 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-01-02 00:51:35.379708 | orchestrator |
2026-01-02 00:51:35.379716 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-01-02 00:51:35.379724 | orchestrator | Friday 02 January 2026 00:49:41 +0000 (0:00:00.813) 0:02:52.664 ********
2026-01-02 00:51:35.379732 | orchestrator | ok: [testbed-manager]
2026-01-02 00:51:35.379739 | orchestrator |
2026-01-02 00:51:35.379747 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-01-02 00:51:35.379755 | orchestrator | Friday 02 January 2026 00:49:42 +0000 (0:00:00.813) 0:02:53.478 ********
2026-01-02 00:51:35.379763 | orchestrator | ok: [testbed-manager]
2026-01-02 00:51:35.379771 | orchestrator |
2026-01-02 00:51:35.379779 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-01-02 00:51:35.379787 | orchestrator | Friday 02 January 2026 00:49:44 +0000 (0:00:01.751) 0:02:55.229 ********
2026-01-02 00:51:35.379795 | orchestrator | changed: [testbed-manager]
2026-01-02 00:51:35.379803 | orchestrator |
2026-01-02 00:51:35.379818 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-01-02 00:51:35.379826 | orchestrator | Friday 02 January 2026 00:49:44 +0000 (0:00:00.780) 0:02:56.009 ********
2026-01-02 00:51:35.379834 | orchestrator | ok: [testbed-manager]
2026-01-02 00:51:35.379842 | orchestrator |
2026-01-02 00:51:35.379850 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-01-02 00:51:35.379858 | orchestrator | Friday 02 January 2026 00:49:45 +0000 (0:00:00.483) 0:02:56.493 ********
2026-01-02 00:51:35.379912 | orchestrator | changed: [testbed-manager]
2026-01-02 00:51:35.379920 | orchestrator |
2026-01-02 00:51:35.379928 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-01-02 00:51:35.379936 | orchestrator | Friday 02 January 2026 00:49:55 +0000 (0:00:10.353) 0:03:06.847 ********
2026-01-02 00:51:35.379944 | orchestrator | changed: [testbed-manager]
2026-01-02 00:51:35.379952 | orchestrator |
2026-01-02 00:51:35.379960 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-01-02 00:51:35.379973 | orchestrator | Friday 02 January 2026 00:50:10 +0000 (0:00:14.556) 0:03:21.403 ********
2026-01-02 00:51:35.379987 | orchestrator | ok: [testbed-manager]
2026-01-02 00:51:35.379996 | orchestrator |
2026-01-02 00:51:35.380004 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-01-02 00:51:35.380012 | orchestrator |
2026-01-02 00:51:35.380020 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-01-02 00:51:35.380028 | orchestrator | Friday 02 January 2026 00:50:10 +0000 (0:00:00.671) 0:03:22.075 ********
2026-01-02 00:51:35.380036 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:51:35.380044 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:51:35.380052 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:51:35.380060 | orchestrator |
2026-01-02 00:51:35.380068 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-01-02 00:51:35.380076 | orchestrator | Friday 02 January 2026 00:50:11 +0000 (0:00:00.392) 0:03:22.468 ********
2026-01-02 00:51:35.380084 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:51:35.380093 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:51:35.380101 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:51:35.380109 | orchestrator |
2026-01-02 00:51:35.380117 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-01-02 00:51:35.380125 | orchestrator | Friday 02 January 2026 00:50:11 +0000 (0:00:00.393) 0:03:22.862 ********
2026-01-02 00:51:35.380133 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:51:35.380141 | orchestrator |
2026-01-02 00:51:35.380149 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-01-02 00:51:35.380157 | orchestrator | Friday 02 January 2026 00:50:12 +0000 (0:00:00.912) 0:03:23.774 ********
2026-01-02 00:51:35.380165 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-02 00:51:35.380173 | orchestrator |
2026-01-02 00:51:35.380181 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-01-02 00:51:35.380190 | orchestrator | Friday 02 January 2026 00:50:13 +0000 (0:00:00.951) 0:03:24.726 ********
2026-01-02 00:51:35.380198 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-02 00:51:35.380206 | orchestrator |
2026-01-02 00:51:35.380214 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-01-02 00:51:35.380222 | orchestrator | Friday 02 January 2026 00:50:14 +0000 (0:00:00.913) 0:03:25.639 ********
2026-01-02 00:51:35.380230 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:51:35.380238 | orchestrator |
2026-01-02 00:51:35.380246 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-01-02 00:51:35.380254 | orchestrator | Friday 02 January 2026 00:50:14 +0000 (0:00:00.125) 0:03:25.765 ********
2026-01-02 00:51:35.380262 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-02 00:51:35.380270 | orchestrator |
2026-01-02 00:51:35.380278 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-01-02 00:51:35.380286 | orchestrator | Friday 02 January 2026 00:50:15 +0000 (0:00:01.055) 0:03:26.821 ********
2026-01-02 00:51:35.380294 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:51:35.380302 | orchestrator |
2026-01-02 00:51:35.380310 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-01-02 00:51:35.380318 | orchestrator | Friday 02 January 2026 00:50:15 +0000 (0:00:00.102) 0:03:26.923 ********
2026-01-02 00:51:35.380325 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:51:35.380332 | orchestrator |
2026-01-02 00:51:35.380339 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-01-02 00:51:35.380345 | orchestrator | Friday 02 January 2026 00:50:15 +0000 (0:00:00.111) 0:03:27.035 ********
2026-01-02 00:51:35.380352 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:51:35.380359 | orchestrator |
2026-01-02 00:51:35.380366 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-01-02 00:51:35.380373 | orchestrator | Friday 02 January 2026 00:50:16 +0000 (0:00:00.118) 0:03:27.154 ********
2026-01-02 00:51:35.380384 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:51:35.380391 | orchestrator |
2026-01-02 00:51:35.380398 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-01-02 00:51:35.380405 | orchestrator | Friday 02 January 2026 00:50:16 +0000 (0:00:00.119) 0:03:27.273 ********
2026-01-02 00:51:35.380411 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-02 00:51:35.380418 | orchestrator |
2026-01-02 00:51:35.380425 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-01-02 00:51:35.380432 | orchestrator | Friday 02 January 2026 00:50:21 +0000 (0:00:05.196) 0:03:32.470 ********
2026-01-02 00:51:35.380439 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-01-02 00:51:35.380445 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-01-02 00:51:35.380452 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-01-02 00:51:35.380459 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-01-02 00:51:35.380466 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-01-02 00:51:35.380473 | orchestrator |
2026-01-02 00:51:35.380484 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-01-02 00:51:35.380491 | orchestrator | Friday 02 January 2026 00:51:03 +0000 (0:00:42.008) 0:04:14.478 ********
2026-01-02 00:51:35.380498 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-02 00:51:35.380504 | orchestrator |
2026-01-02 00:51:35.380511 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-01-02 00:51:35.380518 | orchestrator | Friday 02 January 2026 00:51:04 +0000 (0:00:01.394) 0:04:15.873 ********
2026-01-02 00:51:35.380525 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-02 00:51:35.380532 | orchestrator |
2026-01-02 00:51:35.380539 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-01-02 00:51:35.380546 | orchestrator | Friday 02 January 2026 00:51:06 +0000 (0:00:01.977) 0:04:17.850 ********
2026-01-02 00:51:35.380553 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-02 00:51:35.380560 | orchestrator |
2026-01-02 00:51:35.380570 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-01-02 00:51:35.380577 | orchestrator | Friday 02 January 2026 00:51:07 +0000 (0:00:01.109) 0:04:18.960 ********
2026-01-02 00:51:35.380584 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:51:35.380591 | orchestrator |
2026-01-02 00:51:35.380598 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-01-02 00:51:35.380605 | orchestrator | Friday 02 January 2026 00:51:07 +0000 (0:00:00.131) 0:04:19.092 ********
2026-01-02 00:51:35.380611 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-01-02 00:51:35.380618 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-01-02 00:51:35.380625 | orchestrator |
2026-01-02 00:51:35.380632 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-01-02 00:51:35.380639 | orchestrator | Friday 02 January 2026 00:51:09 +0000 (0:00:01.988) 0:04:21.081 ********
2026-01-02 00:51:35.380646 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:51:35.380652 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:51:35.380659 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:51:35.380666 | orchestrator |
2026-01-02 00:51:35.380673 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-01-02 00:51:35.380680 | orchestrator | Friday 02 January 2026 00:51:10 +0000 (0:00:00.442) 0:04:21.524 ********
2026-01-02 00:51:35.380687 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:51:35.380693 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:51:35.380700 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:51:35.380707 | orchestrator |
2026-01-02 00:51:35.380714 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-01-02 00:51:35.380721 | orchestrator |
2026-01-02 00:51:35.380728 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-01-02 00:51:35.380740 | orchestrator | Friday 02 January 2026 00:51:11 +0000 (0:00:01.322) 0:04:22.846 ********
2026-01-02 00:51:35.380747 | orchestrator | ok: [testbed-manager]
2026-01-02 00:51:35.380753 | orchestrator |
2026-01-02 00:51:35.380760 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-01-02 00:51:35.380767 | orchestrator | Friday 02 January 2026 00:51:11 +0000 (0:00:00.155) 0:04:23.001 ********
2026-01-02 00:51:35.380774 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-01-02 00:51:35.380781 | orchestrator |
2026-01-02 00:51:35.380788 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-01-02 00:51:35.380794 | orchestrator | Friday 02 January 2026 00:51:12 +0000 (0:00:00.216) 0:04:23.218 ********
2026-01-02 00:51:35.380801 | orchestrator | changed: [testbed-manager]
2026-01-02 00:51:35.380808 | orchestrator |
2026-01-02 00:51:35.380815 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-01-02 00:51:35.380821 | orchestrator |
2026-01-02 00:51:35.380828 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-01-02 00:51:35.380835 | orchestrator | Friday 02 January 2026 00:51:17 +0000 (0:00:05.578) 0:04:28.796 ********
2026-01-02 00:51:35.380842 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:51:35.380849 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:51:35.380855 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:51:35.380875 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:51:35.380882 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:51:35.380889 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:51:35.380896 | orchestrator |
2026-01-02 00:51:35.380903 | orchestrator | TASK [Manage labels] ***********************************************************
2026-01-02 00:51:35.380910 | orchestrator | Friday 02 January 2026 00:51:18 +0000 (0:00:00.819) 0:04:29.616 ********
2026-01-02 00:51:35.380917 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-02 00:51:35.380927 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-02 00:51:35.380938 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-02 00:51:35.380948 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-02 00:51:35.380959 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-02 00:51:35.380970 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-02 00:51:35.380980 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-02 00:51:35.380991 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-02 00:51:35.381001 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-02 00:51:35.381008 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-02 00:51:35.381015 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-02 00:51:35.381027 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-02 00:51:35.381034 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-02 00:51:35.381040 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-02 00:51:35.381047 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-02 00:51:35.381054 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-02 00:51:35.381061 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-02 00:51:35.381068 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-02 00:51:35.381084 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-02 00:51:35.381091 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-02 00:51:35.381098 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-02 00:51:35.381105 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-02 00:51:35.381111 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-02 00:51:35.381118 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-02 00:51:35.381125 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-02 00:51:35.381132 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-02 00:51:35.381138 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-02 00:51:35.381145 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-02 00:51:35.381152 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-02 00:51:35.381158 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-02 00:51:35.381165 | orchestrator |
2026-01-02 00:51:35.381172 | orchestrator | TASK [Manage annotations] ******************************************************
2026-01-02 00:51:35.381179 | orchestrator | Friday 02 January 2026 00:51:32 +0000 (0:00:14.171) 0:04:43.787 ********
2026-01-02 00:51:35.381185 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:51:35.381192 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:51:35.381199 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:51:35.381206 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:51:35.381213 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:51:35.381219 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:51:35.381226 | orchestrator |
2026-01-02 00:51:35.381233 | orchestrator | TASK [Manage taints] ***********************************************************
2026-01-02 00:51:35.381240 | orchestrator | Friday 02 January 2026 00:51:33 +0000 (0:00:00.859) 0:04:44.647 ********
2026-01-02 00:51:35.381246 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:51:35.381253 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:51:35.381260 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:51:35.381267 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:51:35.381274 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:51:35.381280 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:51:35.381287 | orchestrator |
2026-01-02 00:51:35.381294 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:51:35.381301 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:51:35.381310 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-01-02 00:51:35.381317 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-02 00:51:35.381324 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-02 00:51:35.381331 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-02 00:51:35.381337 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-02 00:51:35.381344 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-02 00:51:35.381356 | orchestrator |
2026-01-02 00:51:35.381362 | orchestrator |
2026-01-02 00:51:35.381369 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:51:35.381376 | orchestrator | Friday 02 January 2026 00:51:34 +0000 (0:00:00.514) 0:04:45.161 ********
2026-01-02 00:51:35.381383 | orchestrator | ===============================================================================
2026-01-02 00:51:35.381390 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.61s
2026-01-02 00:51:35.381400 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.01s
2026-01-02 00:51:35.381408 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.64s
2026-01-02 00:51:35.381414 | orchestrator | kubectl : Install required packages ------------------------------------ 14.56s
2026-01-02 00:51:35.381421 | orchestrator | Manage labels ---------------------------------------------------------- 14.17s
2026-01-02 00:51:35.381428 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.92s
2026-01-02 00:51:35.381434 | orchestrator | kubectl : Add repository Debian ---------------------------------------- 10.35s
2026-01-02 00:51:35.381441 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.58s
2026-01-02 00:51:35.381448 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.50s
2026-01-02 00:51:35.381455 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.20s
2026-01-02 00:51:35.381465 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.23s
2026-01-02 00:51:35.381472 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.75s
2026-01-02 00:51:35.381479 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.51s
2026-01-02 00:51:35.381485 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.28s
2026-01-02 00:51:35.381492 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.22s
2026-01-02 00:51:35.381499 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.12s
2026-01-02 00:51:35.381506 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.99s
2026-01-02 00:51:35.381512 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.98s
2026-01-02 00:51:35.381519 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 1.94s
2026-01-02 00:51:35.381526 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 1.87s
2026-01-02 00:51:38.418835 | orchestrator | 2026-01-02 00:51:38 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:51:38.420616 | orchestrator | 2026-01-02 00:51:38 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:51:38.424183 | orchestrator | 2026-01-02 00:51:38 | INFO  | Task ea52bca3-ce75-4afa-8a29-fd56ba81f5b3 is in state STARTED
2026-01-02 00:51:38.426013 | orchestrator | 2026-01-02 00:51:38 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:51:38.428113 | orchestrator | 2026-01-02 00:51:38 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED
2026-01-02 00:51:38.430428 | orchestrator | 2026-01-02 00:51:38 | INFO  | Task 12aaf499-f89d-4325-9e5d-9e76d9863543 is in state STARTED
2026-01-02 00:51:38.431021 | orchestrator | 2026-01-02 00:51:38 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:51:41.472601 | orchestrator | 2026-01-02 00:51:41 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:51:41.472731 | orchestrator | 2026-01-02 00:51:41 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:51:41.472774 | orchestrator | 2026-01-02 00:51:41 | INFO  | Task ea52bca3-ce75-4afa-8a29-fd56ba81f5b3 is in state STARTED
2026-01-02 00:51:41.472786 | orchestrator | 2026-01-02 00:51:41 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:51:41.472797 | orchestrator | 2026-01-02 00:51:41 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED
2026-01-02 00:51:41.472807 | orchestrator | 2026-01-02 00:51:41 | INFO  | Task 12aaf499-f89d-4325-9e5d-9e76d9863543 is in state STARTED
2026-01-02 00:51:41.472817 | orchestrator | 2026-01-02 00:51:41 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:51:44.507625 | orchestrator | 2026-01-02 00:51:44 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:51:44.510805 | orchestrator | 2026-01-02 00:51:44 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:51:44.511438 | orchestrator | 2026-01-02 00:51:44 | INFO  | Task ea52bca3-ce75-4afa-8a29-fd56ba81f5b3 is in state STARTED
2026-01-02 00:51:44.512830 | orchestrator | 2026-01-02 00:51:44 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:51:44.513703 | orchestrator | 2026-01-02 00:51:44 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED
2026-01-02 00:51:44.515417 | orchestrator | 2026-01-02 00:51:44 | INFO  | Task 12aaf499-f89d-4325-9e5d-9e76d9863543 is in state SUCCESS
2026-01-02 00:51:44.515472 | orchestrator | 2026-01-02 00:51:44 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:51:47.546995 | orchestrator | 2026-01-02 00:51:47 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:51:47.549906 | orchestrator | 2026-01-02 00:51:47 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:51:47.550478 | orchestrator | 2026-01-02 00:51:47 | INFO  | Task ea52bca3-ce75-4afa-8a29-fd56ba81f5b3 is in state SUCCESS
2026-01-02 00:51:47.552405 | orchestrator | 2026-01-02 00:51:47 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:51:47.553159 | orchestrator | 2026-01-02 00:51:47 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED
2026-01-02 00:51:47.553192 | orchestrator | 2026-01-02 00:51:47 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:51:50.584510 | orchestrator | 2026-01-02 00:51:50 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:51:50.584833 | orchestrator | 2026-01-02 00:51:50 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:51:50.585840 | orchestrator | 2026-01-02 00:51:50 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:51:50.586616 | orchestrator | 2026-01-02 00:51:50 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED
2026-01-02 00:51:50.586665 | orchestrator | 2026-01-02 00:51:50 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:51:53.631086 | orchestrator | 2026-01-02 00:51:53 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:51:53.632035 | orchestrator | 2026-01-02 00:51:53 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:51:53.634576 | orchestrator | 2026-01-02 00:51:53 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:51:53.637347 | orchestrator | 2026-01-02 00:51:53 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED
2026-01-02 00:51:53.637638 | orchestrator | 2026-01-02 00:51:53 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:51:56.668751 | orchestrator | 2026-01-02 00:51:56 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:51:56.670555 | orchestrator | 2026-01-02 00:51:56 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:51:56.670975 | orchestrator | 2026-01-02 00:51:56 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:51:56.672304 | orchestrator | 2026-01-02 00:51:56 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED
2026-01-02 00:51:56.672348 | orchestrator | 2026-01-02 00:51:56 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:51:59.715672 | orchestrator | 2026-01-02 00:51:59 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:51:59.717380 | orchestrator | 2026-01-02 00:51:59 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:51:59.719176 | orchestrator | 2026-01-02 00:51:59 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:51:59.721162 | orchestrator | 2026-01-02 00:51:59 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED
2026-01-02 00:51:59.721331 | orchestrator | 2026-01-02 00:51:59 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:02.765710 | orchestrator | 2026-01-02 00:52:02 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:52:02.767106 | orchestrator | 2026-01-02 00:52:02 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:52:02.770099 | orchestrator | 2026-01-02 00:52:02 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:52:02.772878 | orchestrator | 2026-01-02 00:52:02 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED
2026-01-02 00:52:02.773046 | orchestrator | 2026-01-02 00:52:02 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:05.816127 | orchestrator | 2026-01-02 00:52:05 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:52:05.818631 | orchestrator | 2026-01-02 00:52:05 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:52:05.821379 | orchestrator | 2026-01-02 00:52:05 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:52:05.822897 | orchestrator | 2026-01-02 00:52:05 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED
2026-01-02 00:52:05.822977 | orchestrator | 2026-01-02 00:52:05 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:08.863593 | orchestrator | 2026-01-02 00:52:08 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:52:08.866497 | orchestrator | 2026-01-02 00:52:08 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:52:08.868732 | orchestrator | 2026-01-02 00:52:08 | INFO  | Task e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state STARTED
2026-01-02 00:52:08.871291 | orchestrator | 2026-01-02 00:52:08 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED
2026-01-02 00:52:08.871363 | orchestrator | 2026-01-02 00:52:08 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:11.922571 | orchestrator | 2026-01-02 00:52:11 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:52:11.925589 | orchestrator | 2026-01-02 00:52:11 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:52:11.930078 | orchestrator |
2026-01-02 00:52:11.930140 | orchestrator |
2026-01-02 00:52:11.930154 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-01-02 00:52:11.930167 | orchestrator |
2026-01-02 00:52:11.930207 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-02 00:52:11.930220 | orchestrator | Friday 02 January 2026 00:51:39 +0000 (0:00:00.177) 0:00:00.177 ********
2026-01-02 00:52:11.930231 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-02 00:52:11.930242 | orchestrator |
2026-01-02 00:52:11.930253 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-02 00:52:11.930264 | orchestrator | Friday 02 January 2026 00:51:40 +0000 (0:00:00.846) 0:00:01.024 ********
2026-01-02 00:52:11.930276 | orchestrator | changed: [testbed-manager]
2026-01-02 00:52:11.930288 | orchestrator |
2026-01-02 00:52:11.930300 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-01-02 00:52:11.930311 | orchestrator | Friday 02 January 2026 00:51:41 +0000 (0:00:01.050) 0:00:02.074 ********
2026-01-02 00:52:11.930322 | orchestrator | changed: [testbed-manager]
2026-01-02 00:52:11.930332 | orchestrator |
2026-01-02 00:52:11.930348 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:52:11.930367 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:52:11.930387 | orchestrator |
2026-01-02 00:52:11.930465 | orchestrator |
2026-01-02 00:52:11.930484 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:52:11.930503 | orchestrator | Friday 02 January 2026 00:51:42 +0000 (0:00:00.479) 0:00:02.554 ********
2026-01-02 00:52:11.930522 | orchestrator | ===============================================================================
2026-01-02 00:52:11.930540 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.05s
2026-01-02 00:52:11.930559 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.85s
2026-01-02 00:52:11.930578 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.48s
2026-01-02 00:52:11.930598 | orchestrator |
2026-01-02 00:52:11.930618 | orchestrator |
2026-01-02 00:52:11.930638 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-01-02 00:52:11.930655 | orchestrator |
2026-01-02 00:52:11.930669 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-01-02 00:52:11.930681 | orchestrator | Friday 02 January 2026 00:51:39 +0000 (0:00:00.178) 0:00:00.178 ********
2026-01-02 00:52:11.930694 | orchestrator | ok: [testbed-manager]
2026-01-02 00:52:11.930706 | orchestrator |
2026-01-02 00:52:11.930718 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-02 00:52:11.930729 | orchestrator | Friday 02 January 2026 00:51:40 +0000 (0:00:00.618) 0:00:00.796 ********
2026-01-02 00:52:11.930740 | orchestrator | ok: [testbed-manager]
2026-01-02 00:52:11.930751 | orchestrator |
2026-01-02 00:52:11.930762 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-02 00:52:11.930773 | orchestrator | Friday 02 January 2026 00:51:40 +0000 (0:00:00.601) 0:00:01.398 ********
2026-01-02 00:52:11.930784 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-02 00:52:11.930795 | orchestrator |
2026-01-02 00:52:11.930806 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-02 00:52:11.930816 | orchestrator | Friday 02 January 2026 00:51:41 +0000 (0:00:00.688) 0:00:02.087 ********
2026-01-02 00:52:11.930828 | orchestrator | changed: [testbed-manager]
2026-01-02 00:52:11.930873 | orchestrator |
2026-01-02 00:52:11.930892 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-02 00:52:11.930911 | orchestrator | Friday 02 January 2026 00:51:42 +0000 (0:00:01.337) 0:00:03.424 ********
2026-01-02 00:52:11.930929 | orchestrator | changed: [testbed-manager]
2026-01-02 00:52:11.930948 | orchestrator |
2026-01-02 00:52:11.930960 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-02 00:52:11.930971 | orchestrator | Friday 02 January 2026 00:51:43 +0000 (0:00:00.507) 0:00:03.932 ********
2026-01-02 00:52:11.930982 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-02 00:52:11.931006 | orchestrator |
2026-01-02 00:52:11.931016 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-01-02 00:52:11.931027 | orchestrator | Friday 02 January 2026 00:51:44 +0000 (0:00:01.484) 0:00:05.417 ********
2026-01-02 00:52:11.931038 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-02 00:52:11.931049 | orchestrator |
2026-01-02 00:52:11.931060 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-01-02 00:52:11.931071 | orchestrator | Friday 02 January 2026 00:51:45 +0000 (0:00:00.814) 0:00:06.231 ********
2026-01-02 00:52:11.931082 | orchestrator | ok: [testbed-manager]
2026-01-02 00:52:11.931093 | orchestrator |
2026-01-02 00:52:11.931104 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-01-02 00:52:11.931114 | orchestrator | Friday 02 January 2026 00:51:46 +0000 (0:00:00.371) 0:00:06.602 ********
2026-01-02 00:52:11.931125 | orchestrator | ok: [testbed-manager]
2026-01-02 00:52:11.931136 | orchestrator |
2026-01-02 00:52:11.931147 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:52:11.931158 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:52:11.931169 | orchestrator |
2026-01-02 00:52:11.931180 | orchestrator |
2026-01-02 00:52:11.931191 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:52:11.931202 | orchestrator | Friday 02 January 2026 00:51:46 +0000 (0:00:00.287) 0:00:06.890 ********
2026-01-02 00:52:11.931212 | orchestrator | ===============================================================================
2026-01-02 00:52:11.931223 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.48s
2026-01-02 00:52:11.931234 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.34s
2026-01-02 00:52:11.931245 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.81s
2026-01-02 00:52:11.931273 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.69s
2026-01-02 00:52:11.931285 | orchestrator | Get home directory of operator user ------------------------------------- 0.62s
2026-01-02 00:52:11.931296 | orchestrator | Create .kube directory -------------------------------------------------- 0.60s
2026-01-02 00:52:11.931307 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.51s
2026-01-02 00:52:11.931318 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.37s
2026-01-02 00:52:11.931329 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.29s
2026-01-02 00:52:11.931340 | orchestrator |
2026-01-02 00:52:11.931351 | orchestrator |
2026-01-02 00:52:11.931361 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-01-02 00:52:11.931372 | orchestrator |
2026-01-02 00:52:11.931383 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-01-02 00:52:11.931394 | orchestrator | Friday 02 January 2026 00:49:39 +0000 (0:00:00.108) 0:00:00.108 ********
2026-01-02 00:52:11.931405 | orchestrator | ok: [localhost] => {
2026-01-02 00:52:11.931416 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-01-02 00:52:11.931428 | orchestrator | } 2026-01-02 00:52:11.931439 | orchestrator | 2026-01-02 00:52:11.931451 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-01-02 00:52:11.931462 | orchestrator | Friday 02 January 2026 00:49:39 +0000 (0:00:00.100) 0:00:00.209 ******** 2026-01-02 00:52:11.931474 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-01-02 00:52:11.931486 | orchestrator | ...ignoring 2026-01-02 00:52:11.931498 | orchestrator | 2026-01-02 00:52:11.931509 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-01-02 00:52:11.931519 | orchestrator | Friday 02 January 2026 00:49:43 +0000 (0:00:03.878) 0:00:04.087 ******** 2026-01-02 00:52:11.931530 | orchestrator | skipping: [localhost] 2026-01-02 00:52:11.931548 | orchestrator | 2026-01-02 00:52:11.931558 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-01-02 00:52:11.931569 | orchestrator | Friday 02 January 2026 00:49:43 +0000 (0:00:00.082) 0:00:04.170 ******** 2026-01-02 00:52:11.931667 | orchestrator | ok: [localhost] 2026-01-02 00:52:11.931687 | orchestrator | 2026-01-02 00:52:11.931699 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 00:52:11.931710 | orchestrator | 2026-01-02 00:52:11.931720 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 00:52:11.931732 | orchestrator | Friday 02 January 2026 00:49:43 +0000 (0:00:00.153) 0:00:04.324 ******** 2026-01-02 00:52:11.931743 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:11.931754 | 
orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:11.931764 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:11.931775 | orchestrator | 2026-01-02 00:52:11.931786 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 00:52:11.931797 | orchestrator | Friday 02 January 2026 00:49:43 +0000 (0:00:00.399) 0:00:04.723 ******** 2026-01-02 00:52:11.931816 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-01-02 00:52:11.931862 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-01-02 00:52:11.931882 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-01-02 00:52:11.931900 | orchestrator | 2026-01-02 00:52:11.931916 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-01-02 00:52:11.931933 | orchestrator | 2026-01-02 00:52:11.931952 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-02 00:52:11.931972 | orchestrator | Friday 02 January 2026 00:49:44 +0000 (0:00:00.847) 0:00:05.571 ******** 2026-01-02 00:52:11.931992 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:52:11.932011 | orchestrator | 2026-01-02 00:52:11.932030 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-02 00:52:11.932049 | orchestrator | Friday 02 January 2026 00:49:45 +0000 (0:00:00.588) 0:00:06.160 ******** 2026-01-02 00:52:11.932067 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:11.932086 | orchestrator | 2026-01-02 00:52:11.932105 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-01-02 00:52:11.932124 | orchestrator | Friday 02 January 2026 00:49:46 +0000 (0:00:01.412) 0:00:07.573 ******** 2026-01-02 00:52:11.932143 | orchestrator | skipping: [testbed-node-0] 2026-01-02 
00:52:11.932161 | orchestrator | 2026-01-02 00:52:11.932180 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-01-02 00:52:11.932199 | orchestrator | Friday 02 January 2026 00:49:47 +0000 (0:00:00.520) 0:00:08.093 ******** 2026-01-02 00:52:11.932217 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:11.932235 | orchestrator | 2026-01-02 00:52:11.932254 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-01-02 00:52:11.932273 | orchestrator | Friday 02 January 2026 00:49:47 +0000 (0:00:00.517) 0:00:08.611 ******** 2026-01-02 00:52:11.932293 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:11.932313 | orchestrator | 2026-01-02 00:52:11.932331 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-01-02 00:52:11.932349 | orchestrator | Friday 02 January 2026 00:49:48 +0000 (0:00:00.479) 0:00:09.091 ******** 2026-01-02 00:52:11.932367 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:11.932386 | orchestrator | 2026-01-02 00:52:11.932405 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-02 00:52:11.932423 | orchestrator | Friday 02 January 2026 00:49:49 +0000 (0:00:00.929) 0:00:10.020 ******** 2026-01-02 00:52:11.932441 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:52:11.932460 | orchestrator | 2026-01-02 00:52:11.932487 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-02 00:52:11.932534 | orchestrator | Friday 02 January 2026 00:49:50 +0000 (0:00:01.077) 0:00:11.097 ******** 2026-01-02 00:52:11.932553 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:11.932572 | orchestrator | 2026-01-02 00:52:11.932590 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] 
*************************************** 2026-01-02 00:52:11.932609 | orchestrator | Friday 02 January 2026 00:49:51 +0000 (0:00:01.147) 0:00:12.245 ******** 2026-01-02 00:52:11.932626 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:11.932645 | orchestrator | 2026-01-02 00:52:11.932664 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-01-02 00:52:11.932684 | orchestrator | Friday 02 January 2026 00:49:52 +0000 (0:00:00.663) 0:00:12.908 ******** 2026-01-02 00:52:11.932738 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:11.932757 | orchestrator | 2026-01-02 00:52:11.932776 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-01-02 00:52:11.932795 | orchestrator | Friday 02 January 2026 00:49:53 +0000 (0:00:01.174) 0:00:14.082 ******** 2026-01-02 00:52:11.932821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:11.932872 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:11.932895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:11.932928 | orchestrator | 2026-01-02 00:52:11.932947 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-01-02 00:52:11.932965 | orchestrator | Friday 02 January 2026 00:49:55 +0000 (0:00:02.237) 0:00:16.320 ******** 2026-01-02 00:52:11.933007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:11.933030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:11.933050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:11.933069 | orchestrator | 2026-01-02 00:52:11.933089 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-01-02 00:52:11.933107 | orchestrator | Friday 02 January 2026 00:49:59 +0000 (0:00:04.505) 0:00:20.825 ******** 2026-01-02 
00:52:11.933126 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-02 00:52:11.933151 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-02 00:52:11.933163 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-02 00:52:11.933174 | orchestrator | 2026-01-02 00:52:11.933185 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-01-02 00:52:11.933196 | orchestrator | Friday 02 January 2026 00:50:02 +0000 (0:00:02.109) 0:00:22.935 ******** 2026-01-02 00:52:11.933206 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-02 00:52:11.933217 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-02 00:52:11.933228 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-02 00:52:11.933238 | orchestrator | 2026-01-02 00:52:11.933390 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-01-02 00:52:11.933416 | orchestrator | Friday 02 January 2026 00:50:04 +0000 (0:00:02.246) 0:00:25.181 ******** 2026-01-02 00:52:11.933432 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-02 00:52:11.933451 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-02 00:52:11.933470 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-02 00:52:11.933488 | orchestrator | 2026-01-02 00:52:11.933506 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-01-02 00:52:11.933524 | orchestrator | Friday 02 January 2026 00:50:05 +0000 
(0:00:01.605) 0:00:26.787 ******** 2026-01-02 00:52:11.933539 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-02 00:52:11.933558 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-02 00:52:11.933575 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-02 00:52:11.933596 | orchestrator | 2026-01-02 00:52:11.933614 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-01-02 00:52:11.933632 | orchestrator | Friday 02 January 2026 00:50:09 +0000 (0:00:03.221) 0:00:30.008 ******** 2026-01-02 00:52:11.933648 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-02 00:52:11.933659 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-02 00:52:11.933670 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-02 00:52:11.933681 | orchestrator | 2026-01-02 00:52:11.933692 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-01-02 00:52:11.933703 | orchestrator | Friday 02 January 2026 00:50:11 +0000 (0:00:02.199) 0:00:32.208 ******** 2026-01-02 00:52:11.933713 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-02 00:52:11.933724 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-02 00:52:11.933735 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-02 00:52:11.933746 | orchestrator | 2026-01-02 00:52:11.933756 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-02 
00:52:11.933767 | orchestrator | Friday 02 January 2026 00:50:13 +0000 (0:00:02.175) 0:00:34.384 ******** 2026-01-02 00:52:11.933778 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:11.933789 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:11.933799 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:11.933810 | orchestrator | 2026-01-02 00:52:11.933821 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-01-02 00:52:11.933908 | orchestrator | Friday 02 January 2026 00:50:14 +0000 (0:00:00.525) 0:00:34.909 ******** 2026-01-02 00:52:11.933949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:11.933994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:11.934102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:11.934132 | orchestrator | 2026-01-02 00:52:11.934148 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] 
************************************* 2026-01-02 00:52:11.934166 | orchestrator | Friday 02 January 2026 00:50:15 +0000 (0:00:01.519) 0:00:36.429 ******** 2026-01-02 00:52:11.934183 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:11.934199 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:52:11.934214 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:52:11.934313 | orchestrator | 2026-01-02 00:52:11.934330 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-01-02 00:52:11.934348 | orchestrator | Friday 02 January 2026 00:50:16 +0000 (0:00:00.895) 0:00:37.324 ******** 2026-01-02 00:52:11.934364 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:11.934380 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:52:11.934410 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:52:11.934427 | orchestrator | 2026-01-02 00:52:11.934443 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-01-02 00:52:11.934460 | orchestrator | Friday 02 January 2026 00:50:24 +0000 (0:00:07.741) 0:00:45.066 ******** 2026-01-02 00:52:11.934477 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:11.934494 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:52:11.934511 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:52:11.934528 | orchestrator | 2026-01-02 00:52:11.934545 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-02 00:52:11.934634 | orchestrator | 2026-01-02 00:52:11.934652 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-02 00:52:11.934669 | orchestrator | Friday 02 January 2026 00:50:24 +0000 (0:00:00.326) 0:00:45.392 ******** 2026-01-02 00:52:11.934686 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:11.934704 | orchestrator | 2026-01-02 00:52:11.934720 | orchestrator | TASK [rabbitmq : Put RabbitMQ node 
into maintenance mode] ********************** 2026-01-02 00:52:11.934737 | orchestrator | Friday 02 January 2026 00:50:25 +0000 (0:00:00.604) 0:00:45.997 ******** 2026-01-02 00:52:11.934753 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:11.934769 | orchestrator | 2026-01-02 00:52:11.934787 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-02 00:52:11.934804 | orchestrator | Friday 02 January 2026 00:50:25 +0000 (0:00:00.281) 0:00:46.278 ******** 2026-01-02 00:52:11.934821 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:11.934860 | orchestrator | 2026-01-02 00:52:11.934877 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-02 00:52:11.934893 | orchestrator | Friday 02 January 2026 00:50:27 +0000 (0:00:01.924) 0:00:48.202 ******** 2026-01-02 00:52:11.934909 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:11.934926 | orchestrator | 2026-01-02 00:52:11.934943 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-02 00:52:11.934960 | orchestrator | 2026-01-02 00:52:11.934975 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-02 00:52:11.934992 | orchestrator | Friday 02 January 2026 00:51:25 +0000 (0:00:58.421) 0:01:46.623 ******** 2026-01-02 00:52:11.935009 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:11.935026 | orchestrator | 2026-01-02 00:52:11.935042 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-02 00:52:11.935059 | orchestrator | Friday 02 January 2026 00:51:26 +0000 (0:00:00.922) 0:01:47.546 ******** 2026-01-02 00:52:11.935075 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:11.935091 | orchestrator | 2026-01-02 00:52:11.935108 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 
2026-01-02 00:52:11.935125 | orchestrator | Friday 02 January 2026 00:51:26 +0000 (0:00:00.212) 0:01:47.758 ******** 2026-01-02 00:52:11.935142 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:52:11.935158 | orchestrator | 2026-01-02 00:52:11.935175 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-02 00:52:11.935191 | orchestrator | Friday 02 January 2026 00:51:33 +0000 (0:00:07.012) 0:01:54.770 ******** 2026-01-02 00:52:11.935208 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:52:11.935226 | orchestrator | 2026-01-02 00:52:11.935243 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-02 00:52:11.935259 | orchestrator | 2026-01-02 00:52:11.935275 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-02 00:52:11.935301 | orchestrator | Friday 02 January 2026 00:51:45 +0000 (0:00:11.676) 0:02:06.447 ******** 2026-01-02 00:52:11.935318 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:11.935336 | orchestrator | 2026-01-02 00:52:11.935365 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-02 00:52:11.935382 | orchestrator | Friday 02 January 2026 00:51:46 +0000 (0:00:00.707) 0:02:07.155 ******** 2026-01-02 00:52:11.935398 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:11.935427 | orchestrator | 2026-01-02 00:52:11.935444 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-02 00:52:11.935460 | orchestrator | Friday 02 January 2026 00:51:46 +0000 (0:00:00.279) 0:02:07.434 ******** 2026-01-02 00:52:11.935476 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:52:11.935492 | orchestrator | 2026-01-02 00:52:11.935508 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-02 00:52:11.935525 | orchestrator | Friday 02 
January 2026 00:51:48 +0000 (0:00:01.914) 0:02:09.349 ******** 2026-01-02 00:52:11.935542 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:52:11.935559 | orchestrator | 2026-01-02 00:52:11.935576 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-01-02 00:52:11.935594 | orchestrator | 2026-01-02 00:52:11.935610 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-01-02 00:52:11.935627 | orchestrator | Friday 02 January 2026 00:52:06 +0000 (0:00:18.495) 0:02:27.844 ******** 2026-01-02 00:52:11.935644 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:52:11.935661 | orchestrator | 2026-01-02 00:52:11.935677 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-01-02 00:52:11.935694 | orchestrator | Friday 02 January 2026 00:52:07 +0000 (0:00:00.574) 0:02:28.418 ******** 2026-01-02 00:52:11.935710 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-02 00:52:11.935726 | orchestrator | enable_outward_rabbitmq_True 2026-01-02 00:52:11.935744 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-02 00:52:11.935761 | orchestrator | outward_rabbitmq_restart 2026-01-02 00:52:11.935778 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:11.935794 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:11.935812 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:11.935828 | orchestrator | 2026-01-02 00:52:11.935868 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-01-02 00:52:11.935885 | orchestrator | skipping: no hosts matched 2026-01-02 00:52:11.935902 | orchestrator | 2026-01-02 00:52:11.935919 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-01-02 00:52:11.935935 | orchestrator | skipping: no 
hosts matched 2026-01-02 00:52:11.935952 | orchestrator | 2026-01-02 00:52:11.935969 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-01-02 00:52:11.935985 | orchestrator | skipping: no hosts matched 2026-01-02 00:52:11.936002 | orchestrator | 2026-01-02 00:52:11.936018 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:52:11.936036 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-02 00:52:11.936054 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-02 00:52:11.936071 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:52:11.936087 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:52:11.936104 | orchestrator | 2026-01-02 00:52:11.936120 | orchestrator | 2026-01-02 00:52:11.936137 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:52:11.936153 | orchestrator | Friday 02 January 2026 00:52:10 +0000 (0:00:02.643) 0:02:31.061 ******** 2026-01-02 00:52:11.936169 | orchestrator | =============================================================================== 2026-01-02 00:52:11.936186 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 88.59s 2026-01-02 00:52:11.936203 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.85s 2026-01-02 00:52:11.936220 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.74s 2026-01-02 00:52:11.936248 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 4.51s 2026-01-02 00:52:11.936265 | orchestrator | Check RabbitMQ service 
-------------------------------------------------- 3.88s 2026-01-02 00:52:11.936281 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.22s 2026-01-02 00:52:11.936297 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.64s 2026-01-02 00:52:11.936315 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.25s 2026-01-02 00:52:11.936333 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 2.24s 2026-01-02 00:52:11.936349 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.24s 2026-01-02 00:52:11.936366 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.20s 2026-01-02 00:52:11.936382 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.18s 2026-01-02 00:52:11.936398 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.11s 2026-01-02 00:52:11.936415 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.61s 2026-01-02 00:52:11.936432 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.52s 2026-01-02 00:52:11.936448 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.41s 2026-01-02 00:52:11.936471 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 1.17s 2026-01-02 00:52:11.936497 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.15s 2026-01-02 00:52:11.936515 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.08s 2026-01-02 00:52:11.936533 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 0.93s 2026-01-02 00:52:11.936550 | orchestrator | 2026-01-02 00:52:11 | INFO  | Task 
e8aaa020-9daa-44b5-9ae6-2cdfc23c513a is in state SUCCESS 2026-01-02 00:52:11.936566 | orchestrator | 2026-01-02 00:52:11 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 00:52:11.936582 | orchestrator | 2026-01-02 00:52:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:52:14.974712 | orchestrator | 2026-01-02 00:52:14 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:52:14.975081 | orchestrator | 2026-01-02 00:52:14 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:52:14.977555 | orchestrator | 2026-01-02 00:52:14 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 00:52:14.977597 | orchestrator | 2026-01-02 00:52:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:52:18.033726 | orchestrator | 2026-01-02 00:52:18 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:52:18.034197 | orchestrator | 2026-01-02 00:52:18 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:52:18.035222 | orchestrator | 2026-01-02 00:52:18 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 00:52:18.035276 | orchestrator | 2026-01-02 00:52:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:52:21.065528 | orchestrator | 2026-01-02 00:52:21 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:52:21.067469 | orchestrator | 2026-01-02 00:52:21 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:52:21.068167 | orchestrator | 2026-01-02 00:52:21 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state STARTED 2026-01-02 00:52:21.068372 | orchestrator | 2026-01-02 00:52:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:52:24.103415 | orchestrator | 2026-01-02 00:52:24 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state 
STARTED 2026-01-02 00:52:57.673755 | orchestrator | 2026-01-02 00:52:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:53:00.708886 | orchestrator | 2026-01-02 00:53:00 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:53:00.712705 | orchestrator | 2026-01-02 00:53:00 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:53:00.716489 | orchestrator | 2026-01-02 00:53:00 | INFO  | Task 73fcab67-33fe-462b-993a-1ff467decec7 is in state SUCCESS 2026-01-02 00:53:00.718149 | orchestrator | 2026-01-02 00:53:00.718196 | orchestrator | 2026-01-02 00:53:00.718209 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 00:53:00.718225 | orchestrator | 2026-01-02 00:53:00.718245 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 00:53:00.718265 | orchestrator | Friday 02 January 2026 00:50:35 +0000 (0:00:00.221) 0:00:00.221 ******** 2026-01-02 00:53:00.718285 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:53:00.718307 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:53:00.718326 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:53:00.718366 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:53:00.718523 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:53:00.718545 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:53:00.718566 | orchestrator | 2026-01-02 00:53:00.718587 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 00:53:00.718632 | orchestrator | Friday 02 January 2026 00:50:36 +0000 (0:00:00.977) 0:00:01.199 ******** 2026-01-02 00:53:00.718668 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-01-02 00:53:00.718680 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-01-02 00:53:00.718691 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-01-02 
00:53:00.718702 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-01-02 00:53:00.718713 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-01-02 00:53:00.718724 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-01-02 00:53:00.718735 | orchestrator | 2026-01-02 00:53:00.718749 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-01-02 00:53:00.718762 | orchestrator | 2026-01-02 00:53:00.718775 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-01-02 00:53:00.718869 | orchestrator | Friday 02 January 2026 00:50:37 +0000 (0:00:01.523) 0:00:02.722 ******** 2026-01-02 00:53:00.718892 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:53:00.718909 | orchestrator | 2026-01-02 00:53:00.718923 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-01-02 00:53:00.718936 | orchestrator | Friday 02 January 2026 00:50:39 +0000 (0:00:01.529) 0:00:04.252 ******** 2026-01-02 00:53:00.718954 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.718972 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.718985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.718999 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719049 | 
orchestrator | 2026-01-02 00:53:00.719077 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-01-02 00:53:00.719091 | orchestrator | Friday 02 January 2026 00:50:40 +0000 (0:00:01.421) 0:00:05.673 ******** 2026-01-02 00:53:00.719113 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719136 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719147 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719181 | orchestrator | 2026-01-02 00:53:00.719192 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-01-02 00:53:00.719203 | orchestrator | Friday 02 January 2026 00:50:42 +0000 (0:00:01.862) 0:00:07.536 ******** 2026-01-02 00:53:00.719214 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719226 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719251 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719304 | orchestrator | 2026-01-02 00:53:00.719315 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-01-02 00:53:00.719326 | orchestrator | Friday 02 January 2026 00:50:44 +0000 (0:00:01.782) 0:00:09.319 ******** 2026-01-02 00:53:00.719338 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719350 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719361 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719414 | orchestrator | 2026-01-02 00:53:00.719431 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-01-02 00:53:00.719443 | orchestrator | Friday 02 January 2026 00:50:46 +0000 (0:00:01.753) 0:00:11.073 ******** 2026-01-02 00:53:00.719459 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719471 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719482 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.719527 | orchestrator | 2026-01-02 00:53:00.719539 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-01-02 00:53:00.719550 | orchestrator | Friday 02 January 2026 00:50:47 +0000 (0:00:01.578) 0:00:12.652 ******** 2026-01-02 00:53:00.719568 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:53:00.719580 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:53:00.719591 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:53:00.719602 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:53:00.719613 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:53:00.719624 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:53:00.719641 | orchestrator | 2026-01-02 00:53:00.719661 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-01-02 00:53:00.719681 | orchestrator | Friday 02 January 2026 00:50:50 +0000 (0:00:02.923) 0:00:15.575 ******** 2026-01-02 00:53:00.719699 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-01-02 00:53:00.719711 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-01-02 00:53:00.719722 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-01-02 00:53:00.719733 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-01-02 00:53:00.719744 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-01-02 00:53:00.719755 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-01-02 00:53:00.719766 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-02 00:53:00.719777 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-02 00:53:00.719821 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-02 00:53:00.719834 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-02 00:53:00.719845 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-02 00:53:00.719856 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-02 00:53:00.719873 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-02 00:53:00.719886 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-02 00:53:00.719897 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-02 00:53:00.719909 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-02 00:53:00.719920 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-02 00:53:00.719931 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-02 00:53:00.719942 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 
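The "Configure OVN in OVSDB" items above write per-chassis `external-ids` into the local Open vSwitch database. As a minimal sketch, assuming the option names and values shown in the log (the helper function itself is hypothetical, not part of kolla-ansible), the settings for one chassis can be modeled like this:

```python
# Hypothetical helper mirroring the external-ids the task sets per chassis.
# Option names/values are taken from the log items above; nothing here is
# the actual kolla-ansible implementation.

def ovn_external_ids(encap_ip, sb_ips, sb_port=6642):
    """Build the Open_vSwitch external-ids for one ovn-controller chassis."""
    return {
        "ovn-encap-ip": encap_ip,              # this node's tunnel endpoint
        "ovn-encap-type": "geneve",            # overlay encapsulation
        # comma-separated list of OVN southbound DB endpoints
        "ovn-remote": ",".join(f"tcp:{ip}:{sb_port}" for ip in sb_ips),
        "ovn-remote-probe-interval": "60000",  # ms, SB connection keepalive
        "ovn-openflow-probe-interval": "60",   # s, OpenFlow connection probe
        "ovn-monitor-all": False,              # only monitor relevant rows
    }

ids = ovn_external_ids("192.168.16.13",
                       ["192.168.16.10", "192.168.16.11", "192.168.16.12"])
print(ids["ovn-remote"])
# tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642
```

On the node, each pair would roughly correspond to an `ovs-vsctl set Open_vSwitch . external-ids:<name>=<value>` invocation.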
2026-01-02 00:53:00.719954 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-02 00:53:00.719965 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-02 00:53:00.719976 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-02 00:53:00.719987 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-02 00:53:00.719998 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-02 00:53:00.720010 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-02 00:53:00.720029 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-02 00:53:00.720040 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-02 00:53:00.720051 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-02 00:53:00.720062 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-02 00:53:00.720073 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-02 00:53:00.720085 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-02 00:53:00.720096 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-02 00:53:00.720107 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-02 00:53:00.720119 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'ovn-monitor-all', 'value': False}) 2026-01-02 00:53:00.720130 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-02 00:53:00.720141 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-02 00:53:00.720152 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-02 00:53:00.720163 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-02 00:53:00.720174 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-02 00:53:00.720186 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-02 00:53:00.720197 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-01-02 00:53:00.720208 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-02 00:53:00.720219 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-01-02 00:53:00.720231 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-01-02 00:53:00.720247 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-02 00:53:00.720259 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-01-02 00:53:00.720270 | orchestrator | ok: [testbed-node-3] => 
(item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-02 00:53:00.720286 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-01-02 00:53:00.720298 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-02 00:53:00.720309 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-02 00:53:00.720320 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-01-02 00:53:00.720331 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-02 00:53:00.720342 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-02 00:53:00.720359 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-02 00:53:00.720370 | orchestrator | 2026-01-02 00:53:00.720381 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-02 00:53:00.720393 | orchestrator | Friday 02 January 2026 00:51:11 +0000 (0:00:21.251) 0:00:36.827 ******** 2026-01-02 00:53:00.720404 | orchestrator | 2026-01-02 00:53:00.720415 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-02 00:53:00.720426 | orchestrator | Friday 02 January 2026 00:51:11 +0000 (0:00:00.113) 0:00:36.940 ******** 2026-01-02 00:53:00.720437 | orchestrator | 2026-01-02 00:53:00.720448 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-02 00:53:00.720459 
| orchestrator | Friday 02 January 2026 00:51:12 +0000 (0:00:00.157) 0:00:37.098 ******** 2026-01-02 00:53:00.720470 | orchestrator | 2026-01-02 00:53:00.720481 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-02 00:53:00.720492 | orchestrator | Friday 02 January 2026 00:51:12 +0000 (0:00:00.069) 0:00:37.168 ******** 2026-01-02 00:53:00.720503 | orchestrator | 2026-01-02 00:53:00.720513 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-02 00:53:00.720524 | orchestrator | Friday 02 January 2026 00:51:12 +0000 (0:00:00.063) 0:00:37.231 ******** 2026-01-02 00:53:00.720535 | orchestrator | 2026-01-02 00:53:00.720546 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-02 00:53:00.720557 | orchestrator | Friday 02 January 2026 00:51:12 +0000 (0:00:00.062) 0:00:37.294 ******** 2026-01-02 00:53:00.720568 | orchestrator | 2026-01-02 00:53:00.720579 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-01-02 00:53:00.720590 | orchestrator | Friday 02 January 2026 00:51:12 +0000 (0:00:00.065) 0:00:37.360 ******** 2026-01-02 00:53:00.720601 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:53:00.720613 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:53:00.720624 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:53:00.720635 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:53:00.720646 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:53:00.720657 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:53:00.720668 | orchestrator | 2026-01-02 00:53:00.720679 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-01-02 00:53:00.720690 | orchestrator | Friday 02 January 2026 00:51:14 +0000 (0:00:01.877) 0:00:39.237 ******** 2026-01-02 00:53:00.720701 | orchestrator | changed: [testbed-node-0] 2026-01-02 
00:53:00.720712 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:53:00.720724 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:53:00.720735 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:53:00.720746 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:53:00.720757 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:53:00.720767 | orchestrator | 2026-01-02 00:53:00.720779 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-01-02 00:53:00.720817 | orchestrator | 2026-01-02 00:53:00.720829 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-02 00:53:00.720840 | orchestrator | Friday 02 January 2026 00:51:43 +0000 (0:00:28.851) 0:01:08.088 ******** 2026-01-02 00:53:00.720851 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:53:00.720862 | orchestrator | 2026-01-02 00:53:00.720873 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-02 00:53:00.720884 | orchestrator | Friday 02 January 2026 00:51:43 +0000 (0:00:00.817) 0:01:08.906 ******** 2026-01-02 00:53:00.720895 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:53:00.720906 | orchestrator | 2026-01-02 00:53:00.720917 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-01-02 00:53:00.720927 | orchestrator | Friday 02 January 2026 00:51:44 +0000 (0:00:00.882) 0:01:09.789 ******** 2026-01-02 00:53:00.720947 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:53:00.720958 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:53:00.720969 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:53:00.720980 | orchestrator | 2026-01-02 00:53:00.720991 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume 
availability] *************** 2026-01-02 00:53:00.721003 | orchestrator | Friday 02 January 2026 00:51:46 +0000 (0:00:01.206) 0:01:10.995 ******** 2026-01-02 00:53:00.721014 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:53:00.721025 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:53:00.721036 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:53:00.721053 | orchestrator | 2026-01-02 00:53:00.721064 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-01-02 00:53:00.721076 | orchestrator | Friday 02 January 2026 00:51:46 +0000 (0:00:00.550) 0:01:11.545 ******** 2026-01-02 00:53:00.721087 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:53:00.721098 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:53:00.721109 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:53:00.721120 | orchestrator | 2026-01-02 00:53:00.721131 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-01-02 00:53:00.721143 | orchestrator | Friday 02 January 2026 00:51:46 +0000 (0:00:00.337) 0:01:11.883 ******** 2026-01-02 00:53:00.721154 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:53:00.721165 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:53:00.721176 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:53:00.721187 | orchestrator | 2026-01-02 00:53:00.721198 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-01-02 00:53:00.721209 | orchestrator | Friday 02 January 2026 00:51:47 +0000 (0:00:00.343) 0:01:12.227 ******** 2026-01-02 00:53:00.721220 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:53:00.721231 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:53:00.721242 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:53:00.721253 | orchestrator | 2026-01-02 00:53:00.721264 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-01-02 00:53:00.721275 | orchestrator | 
Friday 02 January 2026 00:51:47 +0000 (0:00:00.448) 0:01:12.676 ******** 2026-01-02 00:53:00.721286 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:53:00.721297 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.721308 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.721319 | orchestrator | 2026-01-02 00:53:00.721330 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-01-02 00:53:00.721341 | orchestrator | Friday 02 January 2026 00:51:48 +0000 (0:00:00.305) 0:01:12.981 ******** 2026-01-02 00:53:00.721352 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:53:00.721363 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.721374 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.721385 | orchestrator | 2026-01-02 00:53:00.721396 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-01-02 00:53:00.721407 | orchestrator | Friday 02 January 2026 00:51:48 +0000 (0:00:00.300) 0:01:13.282 ******** 2026-01-02 00:53:00.721425 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:53:00.721445 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.721462 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.721480 | orchestrator | 2026-01-02 00:53:00.721499 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-01-02 00:53:00.721518 | orchestrator | Friday 02 January 2026 00:51:48 +0000 (0:00:00.439) 0:01:13.721 ******** 2026-01-02 00:53:00.721537 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:53:00.721557 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.721574 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.721591 | orchestrator | 2026-01-02 00:53:00.721603 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-01-02 00:53:00.721614 | orchestrator | 
Friday 02 January 2026 00:51:49 +0000 (0:00:00.427) 0:01:14.149 ******** 2026-01-02 00:53:00.721625 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:53:00.721636 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.721661 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.721672 | orchestrator | 2026-01-02 00:53:00.721683 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-01-02 00:53:00.721694 | orchestrator | Friday 02 January 2026 00:51:49 +0000 (0:00:00.310) 0:01:14.460 ******** 2026-01-02 00:53:00.721705 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:53:00.721716 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.721726 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.721737 | orchestrator | 2026-01-02 00:53:00.721748 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-01-02 00:53:00.721759 | orchestrator | Friday 02 January 2026 00:51:49 +0000 (0:00:00.324) 0:01:14.784 ******** 2026-01-02 00:53:00.721770 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:53:00.721804 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.721818 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.721828 | orchestrator | 2026-01-02 00:53:00.721840 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-01-02 00:53:00.721884 | orchestrator | Friday 02 January 2026 00:51:50 +0000 (0:00:00.349) 0:01:15.134 ******** 2026-01-02 00:53:00.721896 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:53:00.721907 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.721918 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.721929 | orchestrator | 2026-01-02 00:53:00.721940 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-01-02 00:53:00.721951 | orchestrator | 
Friday 02 January 2026 00:51:50 +0000 (0:00:00.578) 0:01:15.712 ******** 2026-01-02 00:53:00.721962 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:53:00.721982 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.722001 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.722079 | orchestrator | 2026-01-02 00:53:00.722094 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-01-02 00:53:00.722105 | orchestrator | Friday 02 January 2026 00:51:51 +0000 (0:00:00.344) 0:01:16.056 ******** 2026-01-02 00:53:00.722116 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:53:00.722127 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.722137 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.722148 | orchestrator | 2026-01-02 00:53:00.722160 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-01-02 00:53:00.722170 | orchestrator | Friday 02 January 2026 00:51:51 +0000 (0:00:00.296) 0:01:16.352 ******** 2026-01-02 00:53:00.722181 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:53:00.722192 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.722203 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.722214 | orchestrator | 2026-01-02 00:53:00.722225 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-01-02 00:53:00.722236 | orchestrator | Friday 02 January 2026 00:51:51 +0000 (0:00:00.321) 0:01:16.674 ******** 2026-01-02 00:53:00.722248 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:53:00.722259 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.722279 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.722291 | orchestrator | 2026-01-02 00:53:00.722302 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-02 00:53:00.722313 | orchestrator | 
Friday 02 January 2026 00:51:52 +0000 (0:00:00.332) 0:01:17.006 ******** 2026-01-02 00:53:00.722324 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:53:00.722334 | orchestrator | 2026-01-02 00:53:00.722351 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-01-02 00:53:00.722362 | orchestrator | Friday 02 January 2026 00:51:52 +0000 (0:00:00.897) 0:01:17.904 ******** 2026-01-02 00:53:00.722373 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:53:00.722384 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:53:00.722395 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:53:00.722415 | orchestrator | 2026-01-02 00:53:00.722426 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-01-02 00:53:00.722437 | orchestrator | Friday 02 January 2026 00:51:53 +0000 (0:00:00.474) 0:01:18.378 ******** 2026-01-02 00:53:00.722448 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:53:00.722459 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:53:00.722470 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:53:00.722480 | orchestrator | 2026-01-02 00:53:00.722491 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-01-02 00:53:00.722502 | orchestrator | Friday 02 January 2026 00:51:53 +0000 (0:00:00.424) 0:01:18.802 ******** 2026-01-02 00:53:00.722513 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:53:00.722524 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.722535 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.722546 | orchestrator | 2026-01-02 00:53:00.722557 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-01-02 00:53:00.722568 | orchestrator | Friday 02 January 2026 00:51:54 +0000 (0:00:00.621) 0:01:19.424 ******** 2026-01-02 
00:53:00.722578 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:53:00.722589 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.722600 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.722611 | orchestrator | 2026-01-02 00:53:00.722622 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-01-02 00:53:00.722633 | orchestrator | Friday 02 January 2026 00:51:54 +0000 (0:00:00.360) 0:01:19.785 ******** 2026-01-02 00:53:00.722644 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:53:00.722655 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.722666 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.722677 | orchestrator | 2026-01-02 00:53:00.722687 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-01-02 00:53:00.722698 | orchestrator | Friday 02 January 2026 00:51:55 +0000 (0:00:00.355) 0:01:20.141 ******** 2026-01-02 00:53:00.722709 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:53:00.722720 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.722731 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.722742 | orchestrator | 2026-01-02 00:53:00.722753 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-01-02 00:53:00.722764 | orchestrator | Friday 02 January 2026 00:51:55 +0000 (0:00:00.359) 0:01:20.500 ******** 2026-01-02 00:53:00.722775 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:53:00.722809 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.722825 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.722836 | orchestrator | 2026-01-02 00:53:00.722847 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-01-02 00:53:00.722858 | orchestrator | Friday 02 January 2026 00:51:56 +0000 (0:00:00.571) 0:01:21.072 
******** 2026-01-02 00:53:00.722869 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:53:00.722881 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.722892 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.722903 | orchestrator | 2026-01-02 00:53:00.722914 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-02 00:53:00.722925 | orchestrator | Friday 02 January 2026 00:51:56 +0000 (0:00:00.419) 0:01:21.491 ******** 2026-01-02 00:53:00.722937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.722955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.722986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723108 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723119 | orchestrator | 2026-01-02 00:53:00.723130 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-02 00:53:00.723142 | orchestrator | Friday 02 January 2026 00:51:58 +0000 (0:00:01.739) 0:01:23.231 ******** 2026-01-02 00:53:00.723153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723276 | orchestrator | 2026-01-02 00:53:00.723288 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-02 00:53:00.723299 | orchestrator | Friday 02 January 2026 00:52:02 +0000 (0:00:04.296) 0:01:27.528 ******** 2026-01-02 00:53:00.723310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-02 00:53:00.723351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.723446 | orchestrator | 2026-01-02 00:53:00.723467 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-02 00:53:00.723479 | orchestrator | Friday 02 January 2026 00:52:05 +0000 (0:00:02.493) 0:01:30.021 ******** 2026-01-02 00:53:00.723490 | orchestrator | 2026-01-02 00:53:00.723501 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-02 00:53:00.723512 | orchestrator | Friday 02 January 2026 00:52:05 +0000 (0:00:00.073) 0:01:30.095 ******** 2026-01-02 00:53:00.723523 | orchestrator | 2026-01-02 00:53:00.723534 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-02 00:53:00.723545 | orchestrator | Friday 02 January 2026 00:52:05 +0000 (0:00:00.066) 0:01:30.161 ******** 2026-01-02 00:53:00.723556 | orchestrator | 2026-01-02 00:53:00.723567 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-02 00:53:00.723578 | orchestrator | Friday 02 January 2026 00:52:05 +0000 (0:00:00.068) 0:01:30.230 ******** 2026-01-02 00:53:00.723589 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:53:00.723601 | orchestrator | changed: [testbed-node-2] 2026-01-02 
00:53:00.723619 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:53:00.723630 | orchestrator | 2026-01-02 00:53:00.723641 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-02 00:53:00.723652 | orchestrator | Friday 02 January 2026 00:52:12 +0000 (0:00:06.806) 0:01:37.037 ******** 2026-01-02 00:53:00.723663 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:53:00.723674 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:53:00.723685 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:53:00.723696 | orchestrator | 2026-01-02 00:53:00.723707 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-02 00:53:00.723718 | orchestrator | Friday 02 January 2026 00:52:14 +0000 (0:00:02.768) 0:01:39.805 ******** 2026-01-02 00:53:00.723729 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:53:00.723740 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:53:00.723751 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:53:00.723762 | orchestrator | 2026-01-02 00:53:00.723773 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-02 00:53:00.723808 | orchestrator | Friday 02 January 2026 00:52:17 +0000 (0:00:02.787) 0:01:42.593 ******** 2026-01-02 00:53:00.723821 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:53:00.723832 | orchestrator | 2026-01-02 00:53:00.723843 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-02 00:53:00.723854 | orchestrator | Friday 02 January 2026 00:52:17 +0000 (0:00:00.372) 0:01:42.965 ******** 2026-01-02 00:53:00.723865 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:53:00.723876 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:53:00.723887 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:53:00.723897 | orchestrator | 2026-01-02 00:53:00.723908 | orchestrator | TASK [ovn-db : 
Configure OVN NB connection settings] *************************** 2026-01-02 00:53:00.723919 | orchestrator | Friday 02 January 2026 00:52:18 +0000 (0:00:00.933) 0:01:43.898 ******** 2026-01-02 00:53:00.723930 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.723941 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.723952 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:53:00.723963 | orchestrator | 2026-01-02 00:53:00.723974 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-02 00:53:00.723985 | orchestrator | Friday 02 January 2026 00:52:19 +0000 (0:00:00.870) 0:01:44.769 ******** 2026-01-02 00:53:00.723996 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:53:00.724007 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:53:00.724018 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:53:00.724029 | orchestrator | 2026-01-02 00:53:00.724040 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-02 00:53:00.724051 | orchestrator | Friday 02 January 2026 00:52:20 +0000 (0:00:00.931) 0:01:45.700 ******** 2026-01-02 00:53:00.724062 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:53:00.724073 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:53:00.724083 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:53:00.724094 | orchestrator | 2026-01-02 00:53:00.724105 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-02 00:53:00.724116 | orchestrator | Friday 02 January 2026 00:52:21 +0000 (0:00:00.903) 0:01:46.604 ******** 2026-01-02 00:53:00.724127 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:53:00.724139 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:53:00.724157 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:53:00.724169 | orchestrator | 2026-01-02 00:53:00.724180 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] 
********************************************* 2026-01-02 00:53:00.724190 | orchestrator | Friday 02 January 2026 00:52:22 +0000 (0:00:01.179) 0:01:47.783 ******** 2026-01-02 00:53:00.724201 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:53:00.724213 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:53:00.724224 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:53:00.724234 | orchestrator | 2026-01-02 00:53:00.724251 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-01-02 00:53:00.724263 | orchestrator | Friday 02 January 2026 00:52:23 +0000 (0:00:01.124) 0:01:48.907 ******** 2026-01-02 00:53:00.724281 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:53:00.724293 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:53:00.724304 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:53:00.724315 | orchestrator | 2026-01-02 00:53:00.724326 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-02 00:53:00.724337 | orchestrator | Friday 02 January 2026 00:52:24 +0000 (0:00:00.347) 0:01:49.254 ******** 2026-01-02 00:53:00.724349 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724361 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724372 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724384 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724396 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724408 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724419 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724431 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724449 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724468 | orchestrator | 2026-01-02 00:53:00.724479 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-02 00:53:00.724491 | orchestrator | Friday 02 January 2026 00:52:25 +0000 (0:00:01.593) 0:01:50.848 ******** 2026-01-02 00:53:00.724507 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724518 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 
00:53:00.724530 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724541 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724599 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724620 | orchestrator | 2026-01-02 00:53:00.724631 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-02 00:53:00.724643 | orchestrator | Friday 02 January 2026 00:52:30 +0000 (0:00:04.855) 0:01:55.703 ******** 2026-01-02 00:53:00.724662 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724678 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724690 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724713 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 
'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724748 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:53:00.724776 | orchestrator | 2026-01-02 00:53:00.724822 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-02 00:53:00.724835 | orchestrator | Friday 02 January 2026 00:52:34 +0000 (0:00:03.543) 0:01:59.246 ******** 2026-01-02 00:53:00.724846 | orchestrator | 2026-01-02 00:53:00.724857 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-02 00:53:00.724868 | orchestrator | Friday 02 January 2026 00:52:34 +0000 (0:00:00.118) 0:01:59.365 ******** 2026-01-02 00:53:00.724887 | orchestrator | 2026-01-02 00:53:00.724906 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-02 00:53:00.724923 | orchestrator | Friday 02 January 2026 00:52:34 +0000 
(0:00:00.068) 0:01:59.433 ********
2026-01-02 00:53:00.724940 | orchestrator |
2026-01-02 00:53:00.724965 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-01-02 00:53:00.724985 | orchestrator | Friday 02 January 2026 00:52:34 +0000 (0:00:00.145) 0:01:59.579 ********
2026-01-02 00:53:00.725006 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:53:00.725025 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:53:00.725039 | orchestrator |
2026-01-02 00:53:00.725058 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-01-02 00:53:00.725070 | orchestrator | Friday 02 January 2026 00:52:41 +0000 (0:00:06.415) 0:02:05.995 ********
2026-01-02 00:53:00.725081 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:53:00.725092 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:53:00.725103 | orchestrator |
2026-01-02 00:53:00.725114 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-01-02 00:53:00.725145 | orchestrator | Friday 02 January 2026 00:52:47 +0000 (0:00:06.309) 0:02:12.304 ********
2026-01-02 00:53:00.725156 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:53:00.725167 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:53:00.725178 | orchestrator |
2026-01-02 00:53:00.725189 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-01-02 00:53:00.725200 | orchestrator | Friday 02 January 2026 00:52:53 +0000 (0:00:06.449) 0:02:18.754 ********
2026-01-02 00:53:00.725211 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:53:00.725222 | orchestrator |
2026-01-02 00:53:00.725233 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-01-02 00:53:00.725244 | orchestrator | Friday 02 January 2026 00:52:53 +0000 (0:00:00.149) 0:02:18.903 ********
2026-01-02 00:53:00.725255 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:53:00.725266 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:53:00.725277 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:53:00.725287 | orchestrator |
2026-01-02 00:53:00.725298 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-01-02 00:53:00.725309 | orchestrator | Friday 02 January 2026 00:52:54 +0000 (0:00:00.828) 0:02:19.732 ********
2026-01-02 00:53:00.725320 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:53:00.725331 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:53:00.725342 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:53:00.725353 | orchestrator |
2026-01-02 00:53:00.725364 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-01-02 00:53:00.725375 | orchestrator | Friday 02 January 2026 00:52:55 +0000 (0:00:00.842) 0:02:20.575 ********
2026-01-02 00:53:00.725386 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:53:00.725397 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:53:00.725407 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:53:00.725419 | orchestrator |
2026-01-02 00:53:00.725429 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-01-02 00:53:00.725440 | orchestrator | Friday 02 January 2026 00:52:56 +0000 (0:00:01.037) 0:02:21.612 ********
2026-01-02 00:53:00.725451 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:53:00.725462 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:53:00.725473 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:53:00.725484 | orchestrator |
2026-01-02 00:53:00.725495 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-01-02 00:53:00.725515 | orchestrator | Friday 02 January 2026 00:52:57 +0000 (0:00:00.906) 0:02:22.519 ********
2026-01-02 00:53:00.725526 | orchestrator | ok: [testbed-node-0]
2026-01-02
00:53:00.725537 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:53:00.725548 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:53:00.725559 | orchestrator |
2026-01-02 00:53:00.725570 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-01-02 00:53:00.725581 | orchestrator | Friday 02 January 2026 00:52:58 +0000 (0:00:01.044) 0:02:23.563 ********
2026-01-02 00:53:00.725592 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:53:00.725603 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:53:00.725614 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:53:00.725625 | orchestrator |
2026-01-02 00:53:00.725636 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:53:00.725648 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-01-02 00:53:00.725659 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-01-02 00:53:00.725670 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-01-02 00:53:00.725681 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:53:00.725692 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:53:00.725703 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:53:00.725714 | orchestrator |
2026-01-02 00:53:00.725726 | orchestrator |
2026-01-02 00:53:00.725737 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:53:00.725748 | orchestrator | Friday 02 January 2026 00:52:59 +0000 (0:00:01.211) 0:02:24.775 ********
2026-01-02 00:53:00.725759 | orchestrator | ===============================================================================
2026-01-02 00:53:00.725769 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 28.85s
2026-01-02 00:53:00.725803 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.25s
2026-01-02 00:53:00.725817 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.22s
2026-01-02 00:53:00.725828 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.24s
2026-01-02 00:53:00.725839 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.08s
2026-01-02 00:53:00.725850 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.86s
2026-01-02 00:53:00.725861 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.30s
2026-01-02 00:53:00.725878 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.54s
2026-01-02 00:53:00.725889 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.92s
2026-01-02 00:53:00.725900 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.49s
2026-01-02 00:53:00.725911 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.88s
2026-01-02 00:53:00.725926 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.86s
2026-01-02 00:53:00.725955 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.78s
2026-01-02 00:53:00.725983 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.75s
2026-01-02 00:53:00.726000 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.74s
2026-01-02 00:53:00.726059 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.59s
2026-01-02 00:53:00.726094 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.58s
2026-01-02 00:53:00.726112 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.53s
2026-01-02 00:53:00.726130 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.52s
2026-01-02 00:53:00.726150 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.42s
2026-01-02 00:53:00.726167 | orchestrator | 2026-01-02 00:53:00 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:53:03.762638 | orchestrator | 2026-01-02 00:53:03 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:53:03.764664 | orchestrator | 2026-01-02 00:53:03 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:53:03.764935 | orchestrator | 2026-01-02 00:53:03 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:53:06.801197 | orchestrator | 2026-01-02 00:53:06 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:53:06.803033 | orchestrator | 2026-01-02 00:53:06 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:53:06.803186 | orchestrator | 2026-01-02 00:53:06 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:53:09.851113 | orchestrator | 2026-01-02 00:53:09 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:53:09.852836 | orchestrator | 2026-01-02 00:53:09 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:53:09.853409 | orchestrator | 2026-01-02 00:53:09 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:53:12.900571 | orchestrator | 2026-01-02 00:53:12 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:53:12.902575 |
orchestrator | 2026-01-02 00:53:12 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:53:12.902637 | orchestrator | 2026-01-02 00:53:12 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:53:15.949854 | orchestrator | 2026-01-02 00:53:15 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:53:15.952289 | orchestrator | 2026-01-02 00:53:15 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:53:15.953068 | orchestrator | 2026-01-02 00:53:15 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:53:19.010908 | orchestrator | 2026-01-02 00:53:19 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:53:19.013621 | orchestrator | 2026-01-02 00:53:19 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:53:19.013943 | orchestrator | 2026-01-02 00:53:19 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:53:22.074479 | orchestrator | 2026-01-02 00:53:22 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:53:22.074568 | orchestrator | 2026-01-02 00:53:22 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:53:22.074581 | orchestrator | 2026-01-02 00:53:22 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:53:25.104305 | orchestrator | 2026-01-02 00:53:25 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:53:25.107123 | orchestrator | 2026-01-02 00:53:25 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:53:25.107170 | orchestrator | 2026-01-02 00:53:25 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:53:28.162842 | orchestrator | 2026-01-02 00:53:28 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:53:28.165778 | orchestrator | 2026-01-02 00:53:28 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:53:28.166044 | orchestrator | 2026-01-02 00:53:28 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:53:31.210723 | orchestrator | 2026-01-02 00:53:31 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:53:31.214210 | orchestrator | 2026-01-02 00:53:31 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:53:31.214377 | orchestrator | 2026-01-02 00:53:31 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:53:34.269596 | orchestrator | 2026-01-02 00:53:34 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:53:34.271177 | orchestrator | 2026-01-02 00:53:34 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:53:34.271229 | orchestrator | 2026-01-02 00:53:34 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:53:37.315980 | orchestrator | 2026-01-02 00:53:37 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:53:37.317275 | orchestrator | 2026-01-02 00:53:37 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:53:37.317557 | orchestrator | 2026-01-02 00:53:37 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:53:40.361311 | orchestrator | 2026-01-02 00:53:40 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:53:40.363454 | orchestrator | 2026-01-02 00:53:40 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:53:40.363574 | orchestrator | 2026-01-02 00:53:40 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:53:43.413391 | orchestrator | 2026-01-02 00:53:43 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:53:43.416066 | orchestrator | 2026-01-02 00:53:43 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:53:43.416118 | orchestrator | 2026-01-02 00:53:43 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:53:46.468849 | orchestrator | 2026-01-02 00:53:46 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:53:46.470321 | orchestrator | 2026-01-02 00:53:46 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:53:46.470371 | orchestrator | 2026-01-02 00:53:46 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:53:49.518186 | orchestrator | 2026-01-02 00:53:49 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:53:49.520207 | orchestrator | 2026-01-02 00:53:49 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:53:49.520255 | orchestrator | 2026-01-02 00:53:49 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:53:52.571707 | orchestrator | 2026-01-02 00:53:52 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:53:52.573825 | orchestrator | 2026-01-02 00:53:52 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:53:52.573870 | orchestrator | 2026-01-02 00:53:52 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:53:55.604984 | orchestrator | 2026-01-02 00:53:55 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:53:55.608209 | orchestrator | 2026-01-02 00:53:55 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:53:55.608267 | orchestrator | 2026-01-02 00:53:55 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:53:58.656399 | orchestrator | 2026-01-02 00:53:58 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED
2026-01-02 00:53:58.657124 | orchestrator | 2026-01-02 00:53:58 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:53:58.657200 | orchestrator | 2026-01-02 00:53:58 | INFO  | Wait 1 second(s)
until the next check 2026-01-02 00:54:01.701419 | orchestrator | 2026-01-02 00:54:01 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:54:01.703634 | orchestrator | 2026-01-02 00:54:01 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:01.704960 | orchestrator | 2026-01-02 00:54:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:04.747285 | orchestrator | 2026-01-02 00:54:04 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:54:04.748884 | orchestrator | 2026-01-02 00:54:04 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:04.748964 | orchestrator | 2026-01-02 00:54:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:07.801134 | orchestrator | 2026-01-02 00:54:07 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:54:07.801335 | orchestrator | 2026-01-02 00:54:07 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:07.801356 | orchestrator | 2026-01-02 00:54:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:10.832917 | orchestrator | 2026-01-02 00:54:10 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:54:10.834004 | orchestrator | 2026-01-02 00:54:10 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:10.834093 | orchestrator | 2026-01-02 00:54:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:13.872647 | orchestrator | 2026-01-02 00:54:13 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:54:13.874360 | orchestrator | 2026-01-02 00:54:13 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:13.874392 | orchestrator | 2026-01-02 00:54:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:16.915986 | orchestrator | 2026-01-02 
00:54:16 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:54:16.917148 | orchestrator | 2026-01-02 00:54:16 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:16.917203 | orchestrator | 2026-01-02 00:54:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:19.964941 | orchestrator | 2026-01-02 00:54:19 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:54:19.965045 | orchestrator | 2026-01-02 00:54:19 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:19.965053 | orchestrator | 2026-01-02 00:54:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:23.019528 | orchestrator | 2026-01-02 00:54:23 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:54:23.022063 | orchestrator | 2026-01-02 00:54:23 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:23.022123 | orchestrator | 2026-01-02 00:54:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:26.066234 | orchestrator | 2026-01-02 00:54:26 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:54:26.069059 | orchestrator | 2026-01-02 00:54:26 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:26.069168 | orchestrator | 2026-01-02 00:54:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:29.101103 | orchestrator | 2026-01-02 00:54:29 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:54:29.101345 | orchestrator | 2026-01-02 00:54:29 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:29.101373 | orchestrator | 2026-01-02 00:54:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:32.148479 | orchestrator | 2026-01-02 00:54:32 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state 
STARTED 2026-01-02 00:54:32.149028 | orchestrator | 2026-01-02 00:54:32 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:32.149140 | orchestrator | 2026-01-02 00:54:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:35.196184 | orchestrator | 2026-01-02 00:54:35 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:54:35.198306 | orchestrator | 2026-01-02 00:54:35 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:35.198340 | orchestrator | 2026-01-02 00:54:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:38.244920 | orchestrator | 2026-01-02 00:54:38 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:54:38.246461 | orchestrator | 2026-01-02 00:54:38 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:38.246553 | orchestrator | 2026-01-02 00:54:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:41.297044 | orchestrator | 2026-01-02 00:54:41 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:54:41.298428 | orchestrator | 2026-01-02 00:54:41 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:41.298786 | orchestrator | 2026-01-02 00:54:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:44.354488 | orchestrator | 2026-01-02 00:54:44 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:54:44.354630 | orchestrator | 2026-01-02 00:54:44 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:44.354804 | orchestrator | 2026-01-02 00:54:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:47.402822 | orchestrator | 2026-01-02 00:54:47 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:54:47.402925 | orchestrator | 2026-01-02 00:54:47 | INFO  
| Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:47.402936 | orchestrator | 2026-01-02 00:54:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:50.454837 | orchestrator | 2026-01-02 00:54:50 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:54:50.458162 | orchestrator | 2026-01-02 00:54:50 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:50.458211 | orchestrator | 2026-01-02 00:54:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:53.504444 | orchestrator | 2026-01-02 00:54:53 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:54:53.509709 | orchestrator | 2026-01-02 00:54:53 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:53.511653 | orchestrator | 2026-01-02 00:54:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:56.574942 | orchestrator | 2026-01-02 00:54:56 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:54:56.576350 | orchestrator | 2026-01-02 00:54:56 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:56.576412 | orchestrator | 2026-01-02 00:54:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:59.622727 | orchestrator | 2026-01-02 00:54:59 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:54:59.626695 | orchestrator | 2026-01-02 00:54:59 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:54:59.626758 | orchestrator | 2026-01-02 00:54:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:02.685834 | orchestrator | 2026-01-02 00:55:02 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:55:02.687394 | orchestrator | 2026-01-02 00:55:02 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 
00:55:02.687941 | orchestrator | 2026-01-02 00:55:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:05.753569 | orchestrator | 2026-01-02 00:55:05 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:55:05.760171 | orchestrator | 2026-01-02 00:55:05 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:55:05.760246 | orchestrator | 2026-01-02 00:55:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:08.805033 | orchestrator | 2026-01-02 00:55:08 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:55:08.806437 | orchestrator | 2026-01-02 00:55:08 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:55:08.806950 | orchestrator | 2026-01-02 00:55:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:11.861044 | orchestrator | 2026-01-02 00:55:11 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:55:11.861353 | orchestrator | 2026-01-02 00:55:11 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:55:11.862124 | orchestrator | 2026-01-02 00:55:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:14.904621 | orchestrator | 2026-01-02 00:55:14 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:55:14.905710 | orchestrator | 2026-01-02 00:55:14 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:55:14.905954 | orchestrator | 2026-01-02 00:55:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:17.945321 | orchestrator | 2026-01-02 00:55:17 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:55:17.946113 | orchestrator | 2026-01-02 00:55:17 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:55:17.946174 | orchestrator | 2026-01-02 00:55:17 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 00:55:20.985115 | orchestrator | 2026-01-02 00:55:20 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:55:20.987873 | orchestrator | 2026-01-02 00:55:20 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:55:20.987929 | orchestrator | 2026-01-02 00:55:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:24.040395 | orchestrator | 2026-01-02 00:55:24 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:55:24.042668 | orchestrator | 2026-01-02 00:55:24 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:55:24.042731 | orchestrator | 2026-01-02 00:55:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:27.089583 | orchestrator | 2026-01-02 00:55:27 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:55:27.090363 | orchestrator | 2026-01-02 00:55:27 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:55:27.090573 | orchestrator | 2026-01-02 00:55:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:30.139776 | orchestrator | 2026-01-02 00:55:30 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:55:30.142733 | orchestrator | 2026-01-02 00:55:30 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:55:30.142974 | orchestrator | 2026-01-02 00:55:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:33.192437 | orchestrator | 2026-01-02 00:55:33 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:55:33.196281 | orchestrator | 2026-01-02 00:55:33 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:55:33.196337 | orchestrator | 2026-01-02 00:55:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:36.249257 | orchestrator | 2026-01-02 
00:55:36 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:55:36.251838 | orchestrator | 2026-01-02 00:55:36 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:55:36.251880 | orchestrator | 2026-01-02 00:55:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:39.303056 | orchestrator | 2026-01-02 00:55:39 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:55:39.309783 | orchestrator | 2026-01-02 00:55:39 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:55:39.309845 | orchestrator | 2026-01-02 00:55:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:42.348736 | orchestrator | 2026-01-02 00:55:42 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:55:42.348917 | orchestrator | 2026-01-02 00:55:42 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:55:42.348937 | orchestrator | 2026-01-02 00:55:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:45.385440 | orchestrator | 2026-01-02 00:55:45 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:55:45.387887 | orchestrator | 2026-01-02 00:55:45 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:55:45.387972 | orchestrator | 2026-01-02 00:55:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:48.441092 | orchestrator | 2026-01-02 00:55:48 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:55:48.444955 | orchestrator | 2026-01-02 00:55:48 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:55:48.445038 | orchestrator | 2026-01-02 00:55:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:51.498930 | orchestrator | 2026-01-02 00:55:51 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state 
STARTED 2026-01-02 00:55:51.499570 | orchestrator | 2026-01-02 00:55:51 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:55:51.500007 | orchestrator | 2026-01-02 00:55:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:54.540055 | orchestrator | 2026-01-02 00:55:54 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:55:54.542582 | orchestrator | 2026-01-02 00:55:54 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:55:54.542780 | orchestrator | 2026-01-02 00:55:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:57.582997 | orchestrator | 2026-01-02 00:55:57 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:55:57.585618 | orchestrator | 2026-01-02 00:55:57 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:55:57.585676 | orchestrator | 2026-01-02 00:55:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:56:00.623507 | orchestrator | 2026-01-02 00:56:00 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:56:00.625101 | orchestrator | 2026-01-02 00:56:00 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:56:00.625206 | orchestrator | 2026-01-02 00:56:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:56:03.680245 | orchestrator | 2026-01-02 00:56:03 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:56:03.680474 | orchestrator | 2026-01-02 00:56:03 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:56:03.680497 | orchestrator | 2026-01-02 00:56:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:56:06.724095 | orchestrator | 2026-01-02 00:56:06 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:56:06.724910 | orchestrator | 2026-01-02 00:56:06 | INFO  
| Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:56:06.726632 | orchestrator | 2026-01-02 00:56:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:56:09.767421 | orchestrator | 2026-01-02 00:56:09 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:56:09.768803 | orchestrator | 2026-01-02 00:56:09 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:56:09.769355 | orchestrator | 2026-01-02 00:56:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:56:12.811541 | orchestrator | 2026-01-02 00:56:12 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state STARTED 2026-01-02 00:56:12.814120 | orchestrator | 2026-01-02 00:56:12 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:56:12.814165 | orchestrator | 2026-01-02 00:56:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:56:15.874324 | orchestrator | 2026-01-02 00:56:15 | INFO  | Task f0ed901e-d406-493d-8a50-ac9bf5995b46 is in state SUCCESS 2026-01-02 00:56:15.874507 | orchestrator | 2026-01-02 00:56:15.878085 | orchestrator | 2026-01-02 00:56:15.878176 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 00:56:15.878194 | orchestrator | 2026-01-02 00:56:15.878206 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 00:56:15.878218 | orchestrator | Friday 02 January 2026 00:49:16 +0000 (0:00:00.330) 0:00:00.330 ******** 2026-01-02 00:56:15.878230 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:15.878244 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:15.878256 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:15.878268 | orchestrator | 2026-01-02 00:56:15.878336 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 00:56:15.878349 | orchestrator | Friday 02 
January 2026 00:49:16 +0000 (0:00:00.388) 0:00:00.718 ******** 2026-01-02 00:56:15.878361 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-01-02 00:56:15.878373 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-01-02 00:56:15.878384 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-01-02 00:56:15.878455 | orchestrator | 2026-01-02 00:56:15.878468 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-01-02 00:56:15.878507 | orchestrator | 2026-01-02 00:56:15.878520 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-02 00:56:15.878531 | orchestrator | Friday 02 January 2026 00:49:17 +0000 (0:00:00.591) 0:00:01.309 ******** 2026-01-02 00:56:15.878543 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.878554 | orchestrator | 2026-01-02 00:56:15.878566 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-01-02 00:56:15.878614 | orchestrator | Friday 02 January 2026 00:49:18 +0000 (0:00:00.987) 0:00:02.297 ******** 2026-01-02 00:56:15.878626 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:15.878639 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:15.878652 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:15.878666 | orchestrator | 2026-01-02 00:56:15.878679 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-01-02 00:56:15.878691 | orchestrator | Friday 02 January 2026 00:49:19 +0000 (0:00:01.740) 0:00:04.037 ******** 2026-01-02 00:56:15.878705 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.878717 | orchestrator | 2026-01-02 00:56:15.878729 | orchestrator | TASK [sysctl : Check IPv6 support] 
********************************************* 2026-01-02 00:56:15.878740 | orchestrator | Friday 02 January 2026 00:49:20 +0000 (0:00:00.975) 0:00:05.012 ******** 2026-01-02 00:56:15.878750 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:15.878762 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:15.878772 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:15.878784 | orchestrator | 2026-01-02 00:56:15.878795 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-01-02 00:56:15.878806 | orchestrator | Friday 02 January 2026 00:49:21 +0000 (0:00:00.636) 0:00:05.649 ******** 2026-01-02 00:56:15.878817 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-02 00:56:15.878828 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-02 00:56:15.878838 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-02 00:56:15.878849 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-02 00:56:15.878860 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-02 00:56:15.878914 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-02 00:56:15.878928 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-02 00:56:15.878940 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-02 00:56:15.878951 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-02 00:56:15.878962 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-02 00:56:15.878973 | orchestrator | 
changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-02 00:56:15.879077 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-02 00:56:15.879089 | orchestrator | 2026-01-02 00:56:15.879100 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-02 00:56:15.879111 | orchestrator | Friday 02 January 2026 00:49:23 +0000 (0:00:02.535) 0:00:08.184 ******** 2026-01-02 00:56:15.879122 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-01-02 00:56:15.879134 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-01-02 00:56:15.879145 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-01-02 00:56:15.879213 | orchestrator | 2026-01-02 00:56:15.879224 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-02 00:56:15.879235 | orchestrator | Friday 02 January 2026 00:49:24 +0000 (0:00:00.806) 0:00:08.990 ******** 2026-01-02 00:56:15.879255 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-01-02 00:56:15.879267 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-01-02 00:56:15.879278 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-01-02 00:56:15.879289 | orchestrator | 2026-01-02 00:56:15.879300 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-02 00:56:15.879311 | orchestrator | Friday 02 January 2026 00:49:26 +0000 (0:00:01.684) 0:00:10.674 ******** 2026-01-02 00:56:15.879322 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-01-02 00:56:15.879334 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.879361 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-01-02 00:56:15.879373 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.879384 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  
2026-01-02 00:56:15.879395 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.879406 | orchestrator | 2026-01-02 00:56:15.879417 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-01-02 00:56:15.879428 | orchestrator | Friday 02 January 2026 00:49:27 +0000 (0:00:00.849) 0:00:11.524 ******** 2026-01-02 00:56:15.879443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-02 00:56:15.879460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-02 00:56:15.879472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-02 00:56:15.879484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:56:15.879496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:56:15.879550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:56:15.879563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:56:15.879625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:56:15.879689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:56:15.879750 | orchestrator | 2026-01-02 00:56:15.879762 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-01-02 00:56:15.879774 | orchestrator | Friday 02 January 2026 00:49:29 +0000 (0:00:02.564) 0:00:14.088 ******** 2026-01-02 00:56:15.879785 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.879840 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.879852 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.879863 | orchestrator | 2026-01-02 00:56:15.879874 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-01-02 00:56:15.879885 | orchestrator | Friday 02 January 2026 00:49:31 +0000 (0:00:01.396) 0:00:15.485 ******** 2026-01-02 00:56:15.879896 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-01-02 00:56:15.879908 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-01-02 00:56:15.879944 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-01-02 00:56:15.879956 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-01-02 00:56:15.879967 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-01-02 00:56:15.879979 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-01-02 00:56:15.879989 | orchestrator | 2026-01-02 00:56:15.880001 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-01-02 00:56:15.880012 | orchestrator | Friday 02 January 2026 00:49:34 +0000 (0:00:03.251) 0:00:18.736 ******** 2026-01-02 00:56:15.880031 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.880043 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.880060 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.880071 | orchestrator | 2026-01-02 00:56:15.880082 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql 
enabled] ******************* 2026-01-02 00:56:15.880093 | orchestrator | Friday 02 January 2026 00:49:36 +0000 (0:00:01.626) 0:00:20.363 ******** 2026-01-02 00:56:15.880104 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:15.880116 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:15.880127 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:15.880161 | orchestrator | 2026-01-02 00:56:15.880174 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-01-02 00:56:15.880185 | orchestrator | Friday 02 January 2026 00:49:39 +0000 (0:00:03.798) 0:00:24.162 ******** 2026-01-02 00:56:15.880197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.880230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.880243 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.880293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2a76db18e8503f0bfbe2b96d839e73050e39f86b', '__omit_place_holder__2a76db18e8503f0bfbe2b96d839e73050e39f86b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-02 00:56:15.880305 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.880317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-02 
00:56:15.880345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.880357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.880369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2a76db18e8503f0bfbe2b96d839e73050e39f86b', '__omit_place_holder__2a76db18e8503f0bfbe2b96d839e73050e39f86b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-02 00:56:15.880380 | orchestrator | skipping: [testbed-node-2] 
2026-01-02 00:56:15.880402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.880515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.880527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.880539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': 
{'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2a76db18e8503f0bfbe2b96d839e73050e39f86b', '__omit_place_holder__2a76db18e8503f0bfbe2b96d839e73050e39f86b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-02 00:56:15.880557 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.880569 | orchestrator | 2026-01-02 00:56:15.880597 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-01-02 00:56:15.880608 | orchestrator | Friday 02 January 2026 00:49:41 +0000 (0:00:01.889) 0:00:26.052 ******** 2026-01-02 00:56:15.880625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-02 00:56:15.880637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-02 00:56:15.880657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-02 00:56:15.880670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:56:15.880681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.880700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2a76db18e8503f0bfbe2b96d839e73050e39f86b', '__omit_place_holder__2a76db18e8503f0bfbe2b96d839e73050e39f86b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-02 00:56:15.880716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:56:15.880728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.880740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:56:15.880757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2a76db18e8503f0bfbe2b96d839e73050e39f86b', '__omit_place_holder__2a76db18e8503f0bfbe2b96d839e73050e39f86b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-02 00:56:15.880821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.880833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2a76db18e8503f0bfbe2b96d839e73050e39f86b', '__omit_place_holder__2a76db18e8503f0bfbe2b96d839e73050e39f86b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-02 00:56:15.880859 | orchestrator | 2026-01-02 00:56:15.880871 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-02 00:56:15.880882 | orchestrator | Friday 02 January 2026 00:49:45 +0000 (0:00:03.667) 0:00:29.719 ******** 2026-01-02 00:56:15.880894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-02 00:56:15.880912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-02 00:56:15.880924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-02 00:56:15.880944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:56:15.880957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:56:15.880975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:56:15.880987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:56:15.881003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:56:15.881015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:56:15.881027 | orchestrator | 2026-01-02 00:56:15.881038 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-02 00:56:15.881049 | orchestrator | Friday 02 January 2026 00:49:49 +0000 (0:00:03.695) 0:00:33.415 ******** 2026-01-02 00:56:15.881060 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-02 00:56:15.881072 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-02 00:56:15.881083 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-02 00:56:15.881121 | orchestrator | 2026-01-02 00:56:15.881133 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-02 00:56:15.881210 | orchestrator | Friday 02 January 2026 00:49:51 +0000 (0:00:02.575) 0:00:35.990 ******** 2026-01-02 00:56:15.881222 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-02 00:56:15.881233 | 
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-02 00:56:15.881244 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-02 00:56:15.881255 | orchestrator | 2026-01-02 00:56:15.883145 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-02 00:56:15.883212 | orchestrator | Friday 02 January 2026 00:50:00 +0000 (0:00:08.591) 0:00:44.582 ******** 2026-01-02 00:56:15.883226 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.883237 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.883248 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.883277 | orchestrator | 2026-01-02 00:56:15.883289 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-02 00:56:15.883299 | orchestrator | Friday 02 January 2026 00:50:01 +0000 (0:00:00.850) 0:00:45.433 ******** 2026-01-02 00:56:15.883310 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-02 00:56:15.883322 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-02 00:56:15.883333 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-02 00:56:15.883344 | orchestrator | 2026-01-02 00:56:15.883355 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-02 00:56:15.883366 | orchestrator | Friday 02 January 2026 00:50:04 +0000 (0:00:02.985) 0:00:48.419 ******** 2026-01-02 00:56:15.883377 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-02 00:56:15.883389 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-02 00:56:15.883399 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-02 00:56:15.883410 | orchestrator | 2026-01-02 00:56:15.883421 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-02 00:56:15.883432 | orchestrator | Friday 02 January 2026 00:50:07 +0000 (0:00:03.461) 0:00:51.880 ******** 2026-01-02 00:56:15.883443 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-02 00:56:15.883454 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-02 00:56:15.883465 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-02 00:56:15.883476 | orchestrator | 2026-01-02 00:56:15.883487 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-02 00:56:15.883497 | orchestrator | Friday 02 January 2026 00:50:09 +0000 (0:00:02.137) 0:00:54.017 ******** 2026-01-02 00:56:15.883508 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-02 00:56:15.883519 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-02 00:56:15.883530 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-02 00:56:15.883540 | orchestrator | 2026-01-02 00:56:15.883551 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-02 00:56:15.883562 | orchestrator | Friday 02 January 2026 00:50:12 +0000 (0:00:02.410) 0:00:56.428 ******** 2026-01-02 00:56:15.883595 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.883607 | orchestrator | 2026-01-02 00:56:15.883618 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over 
extra CA certificates] *** 2026-01-02 00:56:15.883629 | orchestrator | Friday 02 January 2026 00:50:13 +0000 (0:00:00.927) 0:00:57.356 ******** 2026-01-02 00:56:15.883648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-02 00:56:15.883662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-02 00:56:15.883691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-02 00:56:15.883706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:56:15.883721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:56:15.883735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:56:15.883753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:56:15.883767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:56:15.883780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:56:15.883800 | orchestrator | 2026-01-02 00:56:15.883813 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-01-02 00:56:15.883840 | orchestrator | 
Friday 02 January 2026 00:50:16 +0000 (0:00:03.794) 0:01:01.151 ******** 2026-01-02 00:56:15.883862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.883944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.883957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.883968 | orchestrator | 
skipping: [testbed-node-0] 2026-01-02 00:56:15.883980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.883997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.884009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.884027 | orchestrator | skipping: [testbed-node-1] 2026-01-02 
00:56:15.884039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.884115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.884129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.884141 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.884152 | orchestrator | 2026-01-02 
00:56:15.884163 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-01-02 00:56:15.884174 | orchestrator | Friday 02 January 2026 00:50:18 +0000 (0:00:01.126) 0:01:02.277 ******** 2026-01-02 00:56:15.884186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.884245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.884259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.884278 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.884289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.884308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.884320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.884331 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.884343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.884354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.884370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.884388 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.884399 | orchestrator | 2026-01-02 00:56:15.884410 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-02 00:56:15.884421 | orchestrator | Friday 02 January 2026 00:50:19 +0000 (0:00:01.303) 0:01:03.581 ******** 2026-01-02 00:56:15.884433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.884450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.884462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.884474 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.884485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.884496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.884508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.884525 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.884541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.884553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.884670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.884683 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.884694 | orchestrator | 2026-01-02 00:56:15.884705 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-02 00:56:15.884716 | orchestrator | Friday 02 January 2026 00:50:20 +0000 (0:00:01.287) 0:01:04.869 ******** 2026-01-02 00:56:15.884728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.884740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.884751 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.884770 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.884787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.884799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.884811 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.884822 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.884840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.884852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.884864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.884886 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.884897 | orchestrator | 2026-01-02 00:56:15.884908 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-02 00:56:15.884919 | orchestrator | Friday 02 January 2026 00:50:21 +0000 (0:00:00.824) 0:01:05.693 ******** 2026-01-02 00:56:15.884931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.884947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.884959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.884970 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.885079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.885092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.885102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.885119 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.885129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.885144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.885154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.885164 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.885174 | orchestrator | 2026-01-02 00:56:15.885184 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-01-02 00:56:15.885193 | orchestrator | Friday 02 January 2026 00:50:23 +0000 (0:00:02.032) 0:01:07.725 ******** 2026-01-02 00:56:15.885203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.885220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.885231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.885247 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.885257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.885267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.885282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.885292 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.885302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.885317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.885328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.885338 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.885347 | orchestrator | 2026-01-02 00:56:15.885363 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-01-02 00:56:15.885373 | orchestrator | Friday 02 January 2026 00:50:24 +0000 (0:00:01.168) 0:01:08.894 ******** 2026-01-02 00:56:15.885383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.885393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.885403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.885418 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.885428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.885438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.885455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.885466 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.885482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.885492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.885551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.885562 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.885587 | orchestrator | 2026-01-02 00:56:15.885597 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-01-02 00:56:15.885607 | orchestrator | Friday 02 January 2026 00:50:25 +0000 (0:00:00.489) 0:01:09.384 ******** 2026-01-02 00:56:15.885618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  
2026-01-02 00:56:15.885628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.885639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.885649 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.885673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.885745 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.885759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.885769 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.885779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-02 00:56:15.885793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:56:15.885804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:56:15.885814 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.885823 | orchestrator | 2026-01-02 00:56:15.885833 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-01-02 00:56:15.885843 | orchestrator | Friday 02 January 2026 00:50:25 +0000 (0:00:00.822) 0:01:10.207 ******** 2026-01-02 00:56:15.885860 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-02 00:56:15.885870 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-02 00:56:15.885886 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-02 00:56:15.885896 | orchestrator | 2026-01-02 00:56:15.885906 | orchestrator 
| TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-02 00:56:15.885916 | orchestrator | Friday 02 January 2026 00:50:27 +0000 (0:00:01.850) 0:01:12.057 ******** 2026-01-02 00:56:15.885925 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-02 00:56:15.885935 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-02 00:56:15.885945 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-02 00:56:15.885954 | orchestrator | 2026-01-02 00:56:15.885964 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-02 00:56:15.885974 | orchestrator | Friday 02 January 2026 00:50:29 +0000 (0:00:01.598) 0:01:13.656 ******** 2026-01-02 00:56:15.885983 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-02 00:56:15.885993 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-02 00:56:15.886003 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-02 00:56:15.886053 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.886067 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-02 00:56:15.886077 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-02 00:56:15.886086 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.886182 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-02 00:56:15.886194 | orchestrator | skipping: [testbed-node-2] 2026-01-02 
00:56:15.886204 | orchestrator | 2026-01-02 00:56:15.886213 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-01-02 00:56:15.886223 | orchestrator | Friday 02 January 2026 00:50:30 +0000 (0:00:01.048) 0:01:14.704 ******** 2026-01-02 00:56:15.886234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-02 00:56:15.886249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-02 00:56:15.886260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-02 00:56:15.886283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:56:15.886326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:56:15.886339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:56:15.886350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:56:15.886360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:56:15.886378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:56:15.886396 | orchestrator | 2026-01-02 00:56:15.886406 
| orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-02 00:56:15.886416 | orchestrator | Friday 02 January 2026 00:50:33 +0000 (0:00:03.432) 0:01:18.137 ******** 2026-01-02 00:56:15.886425 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.886435 | orchestrator | 2026-01-02 00:56:15.886445 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-01-02 00:56:15.886454 | orchestrator | Friday 02 January 2026 00:50:34 +0000 (0:00:00.676) 0:01:18.813 ******** 2026-01-02 00:56:15.886466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-02 00:56:15.886484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-02 00:56:15.886495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.886505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.886520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-02 00:56:15.886537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-02 00:56:15.886547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.890134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.890181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-02 00:56:15.890192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-02 00:56:15.890201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.890218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.890238 | orchestrator | 2026-01-02 00:56:15.890249 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-01-02 00:56:15.890258 | orchestrator | Friday 02 January 2026 00:50:39 +0000 (0:00:05.431) 0:01:24.244 ******** 2026-01-02 00:56:15.890268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-02 00:56:15.890288 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-02 00:56:15.890297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.890305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.890314 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.890324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 
'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-02 00:56:15.890342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-02 00:56:15.890352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.890361 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.890370 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.890387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-02 00:56:15.890407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-02 00:56:15.890425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.890444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.890453 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.890461 | orchestrator | 2026-01-02 00:56:15.890470 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-01-02 00:56:15.890478 | orchestrator | Friday 02 January 2026 00:50:41 +0000 (0:00:01.326) 0:01:25.571 ******** 2026-01-02 00:56:15.890487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-02 00:56:15.890496 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-02 00:56:15.890505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-02 00:56:15.890518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-02 00:56:15.890527 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.890535 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.890543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-02 00:56:15.890552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-02 00:56:15.890560 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.890568 | orchestrator | 2026-01-02 00:56:15.890602 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-01-02 00:56:15.890610 | orchestrator | Friday 02 January 2026 00:50:42 +0000 (0:00:01.298) 0:01:26.869 ******** 2026-01-02 00:56:15.890619 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.890627 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.890635 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.890643 | orchestrator | 2026-01-02 00:56:15.890742 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-01-02 
00:56:15.890753 | orchestrator | Friday 02 January 2026 00:50:44 +0000 (0:00:01.782) 0:01:28.651 ******** 2026-01-02 00:56:15.890761 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.890769 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.890777 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.890785 | orchestrator | 2026-01-02 00:56:15.890792 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-01-02 00:56:15.890800 | orchestrator | Friday 02 January 2026 00:50:46 +0000 (0:00:02.466) 0:01:31.117 ******** 2026-01-02 00:56:15.890809 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.890817 | orchestrator | 2026-01-02 00:56:15.890825 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-01-02 00:56:15.890839 | orchestrator | Friday 02 January 2026 00:50:47 +0000 (0:00:00.919) 0:01:32.037 ******** 2026-01-02 00:56:15.890848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-02 00:56:15.890858 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.890872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.890881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-02 00:56:15.890896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.890905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.891024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-02 00:56:15.891048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.891065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.891080 | orchestrator | 2026-01-02 00:56:15.891108 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-01-02 
00:56:15.891124 | orchestrator | Friday 02 January 2026 00:50:52 +0000 (0:00:04.364) 0:01:36.402 ******** 2026-01-02 00:56:15.891148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.891159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.891174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.891183 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.891191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.891208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.891217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.891225 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.891238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.891252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.891261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.891269 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.891277 | orchestrator | 2026-01-02 00:56:15.891285 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-01-02 00:56:15.891293 | orchestrator | Friday 02 January 2026 00:50:53 +0000 (0:00:01.137) 0:01:37.539 ******** 2026-01-02 00:56:15.891301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-02 00:56:15.891310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-02 
00:56:15.891323 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.891331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-02 00:56:15.891340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-02 00:56:15.891348 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.891356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-02 00:56:15.891364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-02 00:56:15.891372 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.891380 | orchestrator | 2026-01-02 00:56:15.891388 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-01-02 00:56:15.891396 | orchestrator | Friday 02 January 2026 00:50:54 +0000 (0:00:01.325) 0:01:38.865 ******** 2026-01-02 00:56:15.891404 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.891411 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.891428 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.891442 | orchestrator | 2026-01-02 00:56:15.891454 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-01-02 00:56:15.891467 | orchestrator | Friday 02 January 2026 00:50:56 +0000 (0:00:01.601) 0:01:40.466 
******** 2026-01-02 00:56:15.891481 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.891495 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.891509 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.891522 | orchestrator | 2026-01-02 00:56:15.891538 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-01-02 00:56:15.891547 | orchestrator | Friday 02 January 2026 00:50:58 +0000 (0:00:02.202) 0:01:42.669 ******** 2026-01-02 00:56:15.891555 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.891649 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.891658 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.891666 | orchestrator | 2026-01-02 00:56:15.891674 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-01-02 00:56:15.891682 | orchestrator | Friday 02 January 2026 00:50:58 +0000 (0:00:00.289) 0:01:42.958 ******** 2026-01-02 00:56:15.891690 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.891698 | orchestrator | 2026-01-02 00:56:15.891706 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-01-02 00:56:15.891714 | orchestrator | Friday 02 January 2026 00:50:59 +0000 (0:00:00.744) 0:01:43.703 ******** 2026-01-02 00:56:15.891722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-02 00:56:15.891732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-02 00:56:15.891746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check 
inter 2000 rise 2 fall 5']}}}}) 2026-01-02 00:56:15.891764 | orchestrator | 2026-01-02 00:56:15.891772 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-01-02 00:56:15.891780 | orchestrator | Friday 02 January 2026 00:51:02 +0000 (0:00:02.990) 0:01:46.693 ******** 2026-01-02 00:56:15.891794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-02 00:56:15.891803 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.891812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-02 00:56:15.891820 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.891828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-02 00:56:15.891836 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.891844 | orchestrator | 2026-01-02 00:56:15.891852 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-01-02 00:56:15.891860 | orchestrator | Friday 02 January 2026 00:51:04 +0000 (0:00:01.679) 0:01:48.373 ******** 2026-01-02 00:56:15.891869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-02 00:56:15.891883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-02 00:56:15.891898 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.891907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-02 00:56:15.891915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-02 00:56:15.891924 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.891937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-02 00:56:15.891946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-02 00:56:15.891954 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.891962 | orchestrator | 2026-01-02 00:56:15.891970 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-01-02 00:56:15.891978 | orchestrator | Friday 02 January 2026 00:51:06 +0000 (0:00:02.437) 0:01:50.811 ******** 2026-01-02 00:56:15.891986 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.891994 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.892002 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.892010 | orchestrator | 2026-01-02 00:56:15.892017 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-01-02 00:56:15.892025 | orchestrator | Friday 02 January 2026 00:51:07 +0000 (0:00:00.779) 0:01:51.590 ******** 2026-01-02 00:56:15.892043 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.892051 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.892059 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.892067 | orchestrator | 2026-01-02 00:56:15.892075 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-01-02 00:56:15.892083 | orchestrator | Friday 02 January 2026 00:51:08 +0000 (0:00:01.374) 0:01:52.964 ******** 2026-01-02 00:56:15.892091 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.892099 | orchestrator | 2026-01-02 00:56:15.892107 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-02 00:56:15.892115 | 
orchestrator | Friday 02 January 2026 00:51:09 +0000 (0:00:00.747) 0:01:53.712 ******** 2026-01-02 00:56:15.892123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-02 00:56:15.892142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-02 00:56:15.892184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-02 00:56:15.892234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892264 | orchestrator | 2026-01-02 00:56:15.892272 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-02 00:56:15.892280 | orchestrator | Friday 02 January 2026 00:51:13 +0000 (0:00:04.402) 0:01:58.114 ******** 2026-01-02 00:56:15.892306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.892315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892345 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.892370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.892384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892417 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.892430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.892455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892509 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.892516 | orchestrator | 2026-01-02 00:56:15.892525 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-01-02 00:56:15.892533 | orchestrator | Friday 02 January 2026 00:51:15 +0000 (0:00:01.380) 0:01:59.494 ******** 2026-01-02 00:56:15.892541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-02 00:56:15.892549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-02 00:56:15.892558 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.892566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-02 00:56:15.892598 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-02 00:56:15.892607 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.892615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-02 00:56:15.892628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-02 00:56:15.892636 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.892644 | orchestrator | 2026-01-02 00:56:15.892652 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-01-02 00:56:15.892661 | orchestrator | Friday 02 January 2026 00:51:16 +0000 (0:00:00.963) 0:02:00.458 ******** 2026-01-02 00:56:15.892668 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.892676 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.892684 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.892698 | orchestrator | 2026-01-02 00:56:15.892706 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-01-02 00:56:15.892714 | orchestrator | Friday 02 January 2026 00:51:17 +0000 (0:00:01.531) 0:02:01.989 ******** 2026-01-02 00:56:15.892722 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.892730 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.892738 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.892746 | orchestrator | 2026-01-02 00:56:15.892754 | orchestrator | TASK [include_role : cloudkitty] 
*********************************************** 2026-01-02 00:56:15.892761 | orchestrator | Friday 02 January 2026 00:51:20 +0000 (0:00:02.558) 0:02:04.548 ******** 2026-01-02 00:56:15.892769 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.892777 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.892785 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.892793 | orchestrator | 2026-01-02 00:56:15.892801 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-01-02 00:56:15.892809 | orchestrator | Friday 02 January 2026 00:51:21 +0000 (0:00:00.889) 0:02:05.437 ******** 2026-01-02 00:56:15.892817 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.892825 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.892833 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.892841 | orchestrator | 2026-01-02 00:56:15.892849 | orchestrator | TASK [include_role : designate] ************************************************ 2026-01-02 00:56:15.892857 | orchestrator | Friday 02 January 2026 00:51:22 +0000 (0:00:00.906) 0:02:06.343 ******** 2026-01-02 00:56:15.892865 | orchestrator | included: designate for testbed-node-1, testbed-node-2, testbed-node-0 2026-01-02 00:56:15.892873 | orchestrator | 2026-01-02 00:56:15.892881 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-01-02 00:56:15.892888 | orchestrator | Friday 02 January 2026 00:51:23 +0000 (0:00:01.181) 0:02:07.524 ******** 2026-01-02 00:56:15.892900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-02 00:56:15.892910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-02 00:56:15.892918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.892972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-02 00:56:15.892980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-02 00:56:15.892997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-02 00:56:15.893069 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-02 00:56:15.893089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893135 | orchestrator | 2026-01-02 00:56:15.893143 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-01-02 00:56:15.893151 | orchestrator | Friday 02 January 2026 00:51:29 
+0000 (0:00:06.605) 0:02:14.129 ******** 2026-01-02 00:56:15.893160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-02 00:56:15.893178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-02 00:56:15.893187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893238 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.893250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-02 00:56:15.893259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-02 00:56:15.893268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893322 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.893335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-02 00:56:15.893344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-02 00:56:15.893353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.893409 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.893417 | orchestrator | 2026-01-02 00:56:15.893425 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-01-02 00:56:15.893434 | orchestrator | Friday 02 January 2026 00:51:31 +0000 (0:00:01.184) 0:02:15.314 ******** 2026-01-02 00:56:15.893442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-02 00:56:15.893450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-02 00:56:15.893459 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.893467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-02 00:56:15.893475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-02 00:56:15.893483 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.893492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-02 00:56:15.893500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-02 00:56:15.893508 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.893516 | orchestrator | 2026-01-02 00:56:15.893524 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-01-02 00:56:15.893532 | orchestrator | Friday 02 January 2026 00:51:32 +0000 (0:00:01.282) 0:02:16.596 ******** 2026-01-02 00:56:15.893540 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.893548 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.893556 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.893592 | orchestrator | 2026-01-02 00:56:15.893601 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-01-02 00:56:15.893609 | orchestrator | Friday 02 January 2026 00:51:34 +0000 (0:00:01.936) 0:02:18.532 ******** 2026-01-02 00:56:15.893617 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.893625 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.893633 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.893641 | orchestrator | 2026-01-02 00:56:15.893652 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-01-02 00:56:15.893666 | orchestrator | Friday 02 January 2026 00:51:36 +0000 (0:00:02.038) 0:02:20.571 ******** 2026-01-02 00:56:15.893675 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.893683 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.893691 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.893699 | orchestrator | 2026-01-02 00:56:15.893710 | orchestrator | TASK [include_role : glance] 
*************************************************** 2026-01-02 00:56:15.893718 | orchestrator | Friday 02 January 2026 00:51:36 +0000 (0:00:00.599) 0:02:21.170 ******** 2026-01-02 00:56:15.893726 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.893734 | orchestrator | 2026-01-02 00:56:15.893742 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-01-02 00:56:15.893750 | orchestrator | Friday 02 January 2026 00:51:37 +0000 (0:00:00.858) 0:02:22.029 ******** 2026-01-02 00:56:15.893765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-02 00:56:15.893776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-02 00:56:15.893795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-02 00:56:15.893811 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-02 00:56:15.893830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-02 00:56:15.893845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-02 00:56:15.893860 | orchestrator | 2026-01-02 00:56:15.893868 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-01-02 00:56:15.893876 | orchestrator | Friday 02 January 2026 00:51:43 +0000 (0:00:05.699) 0:02:27.729 ******** 2026-01-02 00:56:15.893888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-02 00:56:15.893904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-02 00:56:15.893914 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.893926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-02 00:56:15.893945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-02 00:56:15.893955 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.893964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-02 00:56:15.893985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-02 00:56:15.893995 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.894003 | orchestrator | 2026-01-02 00:56:15.894011 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-01-02 00:56:15.894047 | orchestrator | Friday 02 January 2026 00:51:47 +0000 (0:00:03.765) 0:02:31.494 ******** 2026-01-02 00:56:15.894056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-02 00:56:15.894065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-02 00:56:15.894083 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.894107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-02 00:56:15.894116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-02 00:56:15.894124 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.894136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-02 00:56:15.894145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-02 00:56:15.894153 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.894161 | orchestrator | 2026-01-02 00:56:15.894169 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-01-02 00:56:15.894177 | orchestrator | Friday 02 January 2026 00:51:50 +0000 (0:00:02.944) 0:02:34.439 ******** 2026-01-02 00:56:15.894185 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.894192 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.894200 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.894208 | orchestrator | 2026-01-02 00:56:15.894216 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-01-02 00:56:15.894224 | orchestrator | Friday 02 January 2026 00:51:51 +0000 (0:00:01.439) 0:02:35.878 ******** 2026-01-02 00:56:15.894232 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.894240 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.894248 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.894256 | orchestrator | 2026-01-02 00:56:15.894264 | orchestrator | TASK [include_role : gnocchi] 
************************************************** 2026-01-02 00:56:15.895442 | orchestrator | Friday 02 January 2026 00:51:53 +0000 (0:00:02.341) 0:02:38.220 ******** 2026-01-02 00:56:15.895517 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.895548 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.895557 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.895566 | orchestrator | 2026-01-02 00:56:15.895607 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-01-02 00:56:15.895619 | orchestrator | Friday 02 January 2026 00:51:54 +0000 (0:00:00.619) 0:02:38.839 ******** 2026-01-02 00:56:15.895628 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.895636 | orchestrator | 2026-01-02 00:56:15.895645 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-01-02 00:56:15.895653 | orchestrator | Friday 02 January 2026 00:51:55 +0000 (0:00:00.888) 0:02:39.728 ******** 2026-01-02 00:56:15.895664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-02 00:56:15.895678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-02 00:56:15.895693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-02 00:56:15.895722 | orchestrator | 2026-01-02 00:56:15.895732 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-01-02 00:56:15.895740 | orchestrator | Friday 02 January 2026 00:51:58 +0000 (0:00:03.452) 0:02:43.180 ******** 2026-01-02 00:56:15.895748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-02 00:56:15.895770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-02 00:56:15.895785 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.895794 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.895802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-02 00:56:15.895810 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.895818 | orchestrator | 2026-01-02 00:56:15.895827 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-01-02 
00:56:15.895835 | orchestrator | Friday 02 January 2026 00:51:59 +0000 (0:00:00.763) 0:02:43.944 ******** 2026-01-02 00:56:15.895844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-02 00:56:15.895854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-02 00:56:15.895862 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.895871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-02 00:56:15.895879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-02 00:56:15.895887 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.895895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-02 00:56:15.895903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-02 00:56:15.895911 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.895920 | orchestrator | 2026-01-02 00:56:15.895928 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-01-02 00:56:15.895936 | orchestrator | Friday 02 January 2026 00:52:00 +0000 (0:00:00.687) 
0:02:44.631 ******** 2026-01-02 00:56:15.895943 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.895951 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.895963 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.895971 | orchestrator | 2026-01-02 00:56:15.895979 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-01-02 00:56:15.895987 | orchestrator | Friday 02 January 2026 00:52:01 +0000 (0:00:01.466) 0:02:46.098 ******** 2026-01-02 00:56:15.895995 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.896004 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.896011 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.896019 | orchestrator | 2026-01-02 00:56:15.896032 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-01-02 00:56:15.896042 | orchestrator | Friday 02 January 2026 00:52:04 +0000 (0:00:02.299) 0:02:48.398 ******** 2026-01-02 00:56:15.896052 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.896061 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.896070 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.896079 | orchestrator | 2026-01-02 00:56:15.896088 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-01-02 00:56:15.896098 | orchestrator | Friday 02 January 2026 00:52:04 +0000 (0:00:00.628) 0:02:49.027 ******** 2026-01-02 00:56:15.896107 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.896116 | orchestrator | 2026-01-02 00:56:15.896126 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-01-02 00:56:15.896135 | orchestrator | Friday 02 January 2026 00:52:05 +0000 (0:00:01.082) 0:02:50.109 ******** 2026-01-02 00:56:15.896155 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 00:56:15.896173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 00:56:15.896212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 00:56:15.896225 | orchestrator | 2026-01-02 00:56:15.896235 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-02 00:56:15.896244 | orchestrator | Friday 02 January 2026 00:52:09 +0000 (0:00:03.682) 0:02:53.791 ******** 2026-01-02 00:56:15.896272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 00:56:15.896289 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.896300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 00:56:15.896315 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.896336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 00:56:15.896348 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.896357 | orchestrator | 2026-01-02 00:56:15.896365 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-01-02 00:56:15.896373 | orchestrator | Friday 02 January 2026 00:52:10 +0000 (0:00:01.332) 0:02:55.124 ******** 2026-01-02 00:56:15.896382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-02 00:56:15.896392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-02 00:56:15.896402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-02 00:56:15.896412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-02 00:56:15.896421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-02 00:56:15.896433 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.896442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}})  2026-01-02 00:56:15.896453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-02 00:56:15.896462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-02 00:56:15.896471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-02 00:56:15.896479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-02 00:56:15.896498 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.896506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-02 00:56:15.896520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-02 00:56:15.896528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-02 00:56:15.896537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-02 00:56:15.896545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-02 00:56:15.896553 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.896561 | orchestrator | 2026-01-02 00:56:15.896569 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-01-02 00:56:15.896638 | orchestrator | Friday 02 January 2026 00:52:11 +0000 (0:00:00.987) 0:02:56.112 ******** 2026-01-02 00:56:15.896647 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.896655 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.896663 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.896671 | orchestrator | 2026-01-02 00:56:15.896684 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-01-02 00:56:15.896692 | orchestrator | Friday 02 January 2026 00:52:13 +0000 (0:00:01.513) 0:02:57.625 ******** 2026-01-02 00:56:15.896700 | orchestrator | changed: [testbed-node-0] 2026-01-02 
00:56:15.896708 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.896717 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.896725 | orchestrator | 2026-01-02 00:56:15.896733 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-01-02 00:56:15.896741 | orchestrator | Friday 02 January 2026 00:52:15 +0000 (0:00:02.172) 0:02:59.798 ******** 2026-01-02 00:56:15.896749 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.896757 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.896765 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.896773 | orchestrator | 2026-01-02 00:56:15.896781 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-01-02 00:56:15.896789 | orchestrator | Friday 02 January 2026 00:52:15 +0000 (0:00:00.346) 0:03:00.145 ******** 2026-01-02 00:56:15.896798 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.896806 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.896814 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.896822 | orchestrator | 2026-01-02 00:56:15.896830 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-01-02 00:56:15.896838 | orchestrator | Friday 02 January 2026 00:52:16 +0000 (0:00:00.602) 0:03:00.748 ******** 2026-01-02 00:56:15.896846 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.896854 | orchestrator | 2026-01-02 00:56:15.896862 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-01-02 00:56:15.896877 | orchestrator | Friday 02 January 2026 00:52:17 +0000 (0:00:01.039) 0:03:01.787 ******** 2026-01-02 00:56:15.896886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-02 00:56:15.896902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:56:15.896912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:56:15.896925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-02 00:56:15.896935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:56:15.896946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:56:15.896956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-02 00:56:15.896970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:56:15.896983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:56:15.896992 | orchestrator | 2026-01-02 00:56:15.897000 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-01-02 00:56:15.897008 | orchestrator | Friday 02 January 2026 00:52:21 +0000 (0:00:04.152) 0:03:05.940 ******** 2026-01-02 00:56:15.897017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-02 00:56:15.897028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:56:15.897037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:56:15.897046 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.897059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-02 00:56:15.897074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:56:15.897083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:56:15.897091 | orchestrator | skipping: 
[testbed-node-1] 2026-01-02 00:56:15.897103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-02 00:56:15.897112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:56:15.897120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:56:15.897129 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.897137 | orchestrator | 2026-01-02 00:56:15.897156 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-01-02 00:56:15.897169 | orchestrator | Friday 02 January 2026 00:52:22 +0000 (0:00:01.016) 0:03:06.956 ******** 2026-01-02 00:56:15.897178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-02 00:56:15.897187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-02 00:56:15.897196 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.897204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-02 00:56:15.897212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}})  2026-01-02 00:56:15.897221 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.897229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-02 00:56:15.897237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-02 00:56:15.897245 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.897253 | orchestrator | 2026-01-02 00:56:15.897261 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-01-02 00:56:15.897270 | orchestrator | Friday 02 January 2026 00:52:23 +0000 (0:00:00.982) 0:03:07.938 ******** 2026-01-02 00:56:15.897278 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.897286 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.897294 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.897302 | orchestrator | 2026-01-02 00:56:15.897310 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-01-02 00:56:15.897318 | orchestrator | Friday 02 January 2026 00:52:25 +0000 (0:00:01.411) 0:03:09.350 ******** 2026-01-02 00:56:15.897326 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.897334 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.897342 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.897350 | orchestrator | 2026-01-02 00:56:15.897358 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-01-02 00:56:15.897366 | orchestrator | Friday 02 January 2026 00:52:27 
+0000 (0:00:02.528) 0:03:11.879 ******** 2026-01-02 00:56:15.897374 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.897382 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.897390 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.897398 | orchestrator | 2026-01-02 00:56:15.897406 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-01-02 00:56:15.897414 | orchestrator | Friday 02 January 2026 00:52:28 +0000 (0:00:00.634) 0:03:12.513 ******** 2026-01-02 00:56:15.897422 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.897430 | orchestrator | 2026-01-02 00:56:15.897438 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-01-02 00:56:15.897446 | orchestrator | Friday 02 January 2026 00:52:29 +0000 (0:00:01.007) 0:03:13.521 ******** 2026-01-02 00:56:15.897461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-02 00:56:15.897475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.897485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-02 00:56:15.897511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.897524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-02 00:56:15.897538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.897546 | orchestrator | 2026-01-02 00:56:15.897554 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-02 00:56:15.897562 | orchestrator | Friday 02 January 2026 00:52:33 +0000 (0:00:04.303) 0:03:17.825 ******** 2026-01-02 00:56:15.897596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-02 00:56:15.897607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.897615 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.897624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-02 00:56:15.897636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.897649 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.897661 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-02 00:56:15.897670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.897679 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.897687 | orchestrator | 2026-01-02 00:56:15.897695 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-01-02 00:56:15.897703 | orchestrator | Friday 02 January 2026 00:52:34 +0000 (0:00:01.329) 0:03:19.154 
******** 2026-01-02 00:56:15.897712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-02 00:56:15.897720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-02 00:56:15.897728 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.897736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-02 00:56:15.897745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-02 00:56:15.897753 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.897761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-02 00:56:15.897769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-02 00:56:15.897781 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.897789 | orchestrator | 2026-01-02 00:56:15.897797 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-01-02 00:56:15.897805 | orchestrator | Friday 02 January 2026 00:52:36 +0000 (0:00:01.207) 0:03:20.362 ******** 2026-01-02 00:56:15.897813 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.897821 | 
orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.897835 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.897843 | orchestrator | 2026-01-02 00:56:15.897851 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-01-02 00:56:15.897859 | orchestrator | Friday 02 January 2026 00:52:37 +0000 (0:00:01.380) 0:03:21.743 ******** 2026-01-02 00:56:15.897867 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.897875 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.897883 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.897891 | orchestrator | 2026-01-02 00:56:15.897899 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-02 00:56:15.897907 | orchestrator | Friday 02 January 2026 00:52:39 +0000 (0:00:02.254) 0:03:23.997 ******** 2026-01-02 00:56:15.897915 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.897923 | orchestrator | 2026-01-02 00:56:15.897931 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-02 00:56:15.897939 | orchestrator | Friday 02 January 2026 00:52:41 +0000 (0:00:01.318) 0:03:25.315 ******** 2026-01-02 00:56:15.897947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': 
'8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-02 00:56:15.897961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.897970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.897979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.897996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-02 00:56:15.898005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.898013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.898067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.898077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-02 00:56:15.898090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 
'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.898099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.898110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.898119 | orchestrator | 2026-01-02 00:56:15.898128 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-01-02 00:56:15.898136 | orchestrator 
| Friday 02 January 2026 00:52:45 +0000 (0:00:04.057) 0:03:29.373 ******** 2026-01-02 00:56:15.898149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-02 00:56:15.898158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.898166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.898180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.898188 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.898200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-02 00:56:15.898209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 
'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.898217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.898238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.898247 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.898255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': 
{'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-02 00:56:15.898268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.898280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.898288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.898297 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.898305 | orchestrator | 2026-01-02 00:56:15.898313 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-01-02 00:56:15.898321 | orchestrator | Friday 02 January 2026 00:52:45 +0000 (0:00:00.703) 0:03:30.076 ******** 2026-01-02 00:56:15.898329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-02 00:56:15.898337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-02 00:56:15.898346 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.898354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-02 00:56:15.898367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-02 00:56:15.898375 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.898384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-02 00:56:15.898396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-02 00:56:15.898404 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.898412 | orchestrator | 2026-01-02 00:56:15.898420 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-01-02 00:56:15.898428 | orchestrator | Friday 02 January 2026 00:52:47 +0000 (0:00:01.266) 0:03:31.343 ******** 2026-01-02 00:56:15.898436 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.898445 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.898453 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.898461 | orchestrator | 2026-01-02 00:56:15.898469 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-01-02 00:56:15.898477 | orchestrator | Friday 02 January 2026 00:52:48 +0000 (0:00:01.415) 0:03:32.758 ******** 2026-01-02 00:56:15.898485 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.898493 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.898501 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.898509 | orchestrator | 2026-01-02 00:56:15.898517 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-01-02 00:56:15.898525 | orchestrator | Friday 02 January 2026 00:52:50 +0000 (0:00:02.265) 0:03:35.024 ******** 2026-01-02 00:56:15.898533 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.898541 | orchestrator | 2026-01-02 00:56:15.898550 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-01-02 00:56:15.898558 | orchestrator | Friday 02 January 2026 00:52:52 +0000 (0:00:01.347) 0:03:36.371 ******** 2026-01-02 00:56:15.898566 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-02 00:56:15.898594 | orchestrator | 2026-01-02 00:56:15.898603 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-01-02 00:56:15.898611 | orchestrator | Friday 02 January 2026 00:52:55 +0000 (0:00:03.033) 0:03:39.405 ******** 2026-01-02 00:56:15.898623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:56:15.898639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-02 00:56:15.898654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:56:15.898664 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.898675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-02 00:56:15.898684 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.898698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:56:15.898712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-02 00:56:15.898721 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.898729 | orchestrator | 2026-01-02 00:56:15.898737 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-01-02 00:56:15.898746 | orchestrator | Friday 02 January 2026 00:52:58 +0000 (0:00:03.188) 0:03:42.593 ******** 2026-01-02 00:56:15.898757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:56:15.898767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-02 00:56:15.898782 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.898797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:56:15.898807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-02 00:56:15.898815 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.898827 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:56:15.898845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-02 00:56:15.898854 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.898862 | orchestrator | 2026-01-02 00:56:15.898870 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-01-02 00:56:15.898878 | orchestrator | Friday 02 January 2026 00:53:01 +0000 (0:00:02.715) 0:03:45.308 ******** 2026-01-02 00:56:15.898887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-02 00:56:15.898895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}})  2026-01-02 00:56:15.898904 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.898913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-02 00:56:15.898932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-02 00:56:15.898953 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.898968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-02 00:56:15.898990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-02 00:56:15.899011 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.899024 | orchestrator | 2026-01-02 00:56:15.899037 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-02 00:56:15.899050 | orchestrator | Friday 02 January 2026 00:53:04 +0000 (0:00:03.000) 0:03:48.309 ******** 2026-01-02 00:56:15.899063 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.899074 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.899083 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.899091 | orchestrator | 2026-01-02 00:56:15.899099 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-01-02 00:56:15.899107 | orchestrator | Friday 02 January 2026 00:53:06 +0000 (0:00:02.015) 0:03:50.324 ******** 2026-01-02 00:56:15.899115 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.899123 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.899131 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.899139 | orchestrator | 2026-01-02 00:56:15.899147 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-01-02 00:56:15.899155 | orchestrator | Friday 02 January 2026 00:53:07 +0000 
(0:00:01.236) 0:03:51.560 ******** 2026-01-02 00:56:15.899163 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.899171 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.899179 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.899187 | orchestrator | 2026-01-02 00:56:15.899195 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-01-02 00:56:15.899203 | orchestrator | Friday 02 January 2026 00:53:07 +0000 (0:00:00.309) 0:03:51.870 ******** 2026-01-02 00:56:15.899211 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.899219 | orchestrator | 2026-01-02 00:56:15.899228 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-01-02 00:56:15.899236 | orchestrator | Friday 02 January 2026 00:53:08 +0000 (0:00:01.289) 0:03:53.160 ******** 2026-01-02 00:56:15.899244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-02 00:56:15.899264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-02 00:56:15.899273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-02 00:56:15.899282 | orchestrator | 2026-01-02 00:56:15.899290 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-02 00:56:15.899298 | orchestrator | Friday 02 January 2026 00:53:10 +0000 (0:00:01.665) 0:03:54.825 ******** 2026-01-02 00:56:15.899314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-02 00:56:15.899323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-02 00:56:15.899332 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.899340 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.899348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-02 00:56:15.899362 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.899370 | orchestrator | 2026-01-02 00:56:15.899378 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-01-02 00:56:15.899386 | orchestrator | Friday 02 January 2026 00:53:10 +0000 (0:00:00.445) 0:03:55.270 ******** 2026-01-02 00:56:15.899395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-02 00:56:15.899404 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.899415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-02 00:56:15.899423 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.899431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-02 00:56:15.899440 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.899448 | orchestrator | 2026-01-02 00:56:15.899456 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-01-02 00:56:15.899464 | orchestrator | Friday 02 January 2026 00:53:11 +0000 (0:00:00.932) 0:03:56.203 ******** 2026-01-02 00:56:15.899472 | orchestrator | skipping: 
[testbed-node-0] 2026-01-02 00:56:15.899480 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.899488 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.899496 | orchestrator | 2026-01-02 00:56:15.899504 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-01-02 00:56:15.899512 | orchestrator | Friday 02 January 2026 00:53:12 +0000 (0:00:00.462) 0:03:56.666 ******** 2026-01-02 00:56:15.899520 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.899528 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.899536 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.899544 | orchestrator | 2026-01-02 00:56:15.899552 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-01-02 00:56:15.899560 | orchestrator | Friday 02 January 2026 00:53:13 +0000 (0:00:01.297) 0:03:57.964 ******** 2026-01-02 00:56:15.899569 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.899708 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.899716 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.899723 | orchestrator | 2026-01-02 00:56:15.899730 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-01-02 00:56:15.899745 | orchestrator | Friday 02 January 2026 00:53:14 +0000 (0:00:00.340) 0:03:58.304 ******** 2026-01-02 00:56:15.899752 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.899759 | orchestrator | 2026-01-02 00:56:15.899766 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-01-02 00:56:15.899773 | orchestrator | Friday 02 January 2026 00:53:15 +0000 (0:00:01.521) 0:03:59.826 ******** 2026-01-02 00:56:15.899781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-02 00:56:15.899796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.899808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.899816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.899827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-02 00:56:15.899835 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.899846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-02 00:56:15.899854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-02 00:56:15.899865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-02 00:56:15.899872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.899880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.899892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.899904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.899911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:56:15.899921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-02 00:56:15.899928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.899939 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.899947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-02 00:56:15.899961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-02 00:56:15.899969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-02 00:56:15.899976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.899986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-02 00:56:15.899994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-02 00:56:15.900006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:56:15.900025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:56:15.900032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-02 00:56:15.900049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 
'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-02 00:56:15.900060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-02 00:56:15.900074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 
'timeout': '30'}}})  2026-01-02 00:56:15.900081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-02 00:56:15.900099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:56:15.900122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-02 00:56:15.900137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2026-01-02 00:56:15.900154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-02 00:56:15.900162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:56:15.900184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-02 00:56:15.900199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-02 00:56:15.900209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-02 00:56:15.900354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:56:15.900367 | orchestrator | 2026-01-02 00:56:15.900374 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-01-02 00:56:15.900381 | orchestrator | Friday 02 January 2026 00:53:19 +0000 (0:00:04.305) 0:04:04.131 ******** 2026-01-02 00:56:15.900389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-02 00:56:15.900396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-02 00:56:15.900437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-02 00:56:15.900444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-02 00:56:15.900469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-02 00:56:15.900491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-02 00:56:15.900517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:56:15.900536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-02 00:56:15.900547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-02 00:56:15.900561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-02 00:56:15.900568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-02 00:56:15.900593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-02 00:56:15.900605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:56:15.900694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-02 00:56:15.900705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': 
{'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:56:15.900732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900739 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.900747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-02 00:56:15.900756 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-02 00:56:15.900768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-02 00:56:15.900775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900786 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-02 00:56:15.900801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-02 00:56:15.900809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-02 00:56:15.900822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:56:15.900830 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.900837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:56:15.900855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-02 00:56:15.900870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-02 00:56:15.900884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.900891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-02 00:56:15.900902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:56:15.900909 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.900916 | orchestrator | 2026-01-02 00:56:15.900924 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-01-02 00:56:15.900930 | orchestrator | Friday 02 January 2026 00:53:21 +0000 (0:00:01.528) 0:04:05.659 ******** 2026-01-02 00:56:15.900938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-02 00:56:15.900945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-02 00:56:15.900951 | orchestrator | skipping: 
[testbed-node-0] 2026-01-02 00:56:15.900959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-02 00:56:15.900967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-02 00:56:15.900975 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.900983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-02 00:56:15.900991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-02 00:56:15.901002 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.901010 | orchestrator | 2026-01-02 00:56:15.901017 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-02 00:56:15.901025 | orchestrator | Friday 02 January 2026 00:53:23 +0000 (0:00:02.151) 0:04:07.810 ******** 2026-01-02 00:56:15.901033 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.901041 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.901048 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.901056 | orchestrator | 2026-01-02 00:56:15.901064 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-01-02 00:56:15.901071 | orchestrator | Friday 02 January 2026 00:53:25 +0000 (0:00:01.507) 0:04:09.318 ******** 2026-01-02 00:56:15.901080 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.901087 | orchestrator | 
changed: [testbed-node-1] 2026-01-02 00:56:15.901095 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.901103 | orchestrator | 2026-01-02 00:56:15.901111 | orchestrator | TASK [include_role : placement] ************************************************ 2026-01-02 00:56:15.901119 | orchestrator | Friday 02 January 2026 00:53:27 +0000 (0:00:02.170) 0:04:11.489 ******** 2026-01-02 00:56:15.901126 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.901134 | orchestrator | 2026-01-02 00:56:15.901142 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-02 00:56:15.901153 | orchestrator | Friday 02 January 2026 00:53:28 +0000 (0:00:01.201) 0:04:12.690 ******** 2026-01-02 00:56:15.901162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-02 00:56:15.901175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-02 00:56:15.901185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-02 00:56:15.901197 | orchestrator | 2026-01-02 00:56:15.901205 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-01-02 00:56:15.901213 | orchestrator | Friday 02 January 2026 00:53:32 +0000 (0:00:03.935) 0:04:16.626 ******** 2026-01-02 00:56:15.901221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.901230 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.901241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.901250 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.901261 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.901270 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.901277 | orchestrator | 2026-01-02 00:56:15.901286 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-01-02 00:56:15.901293 | orchestrator | Friday 02 January 2026 00:53:32 +0000 (0:00:00.521) 0:04:17.148 ******** 2026-01-02 00:56:15.901302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-02 00:56:15.901311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-02 00:56:15.901324 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.901332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}})  2026-01-02 00:56:15.901339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-02 00:56:15.901346 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.901353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-02 00:56:15.901359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-02 00:56:15.901366 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.901373 | orchestrator | 2026-01-02 00:56:15.901380 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-02 00:56:15.901387 | orchestrator | Friday 02 January 2026 00:53:33 +0000 (0:00:00.788) 0:04:17.936 ******** 2026-01-02 00:56:15.901394 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.901401 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.901408 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.901415 | orchestrator | 2026-01-02 00:56:15.901421 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-02 00:56:15.901428 | orchestrator | Friday 02 January 2026 00:53:35 +0000 (0:00:02.071) 0:04:20.008 ******** 2026-01-02 00:56:15.901435 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.901442 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.901449 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.901456 | orchestrator | 
2026-01-02 00:56:15.901462 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-02 00:56:15.901470 | orchestrator | Friday 02 January 2026 00:53:37 +0000 (0:00:01.933) 0:04:21.941 ******** 2026-01-02 00:56:15.901476 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.901483 | orchestrator | 2026-01-02 00:56:15.901495 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-02 00:56:15.901502 | orchestrator | Friday 02 January 2026 00:53:39 +0000 (0:00:01.650) 0:04:23.592 ******** 2026-01-02 00:56:15.901510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-02 00:56:15.901527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.901535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.901543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-02 00:56:15.901553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.901560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.901593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-02 00:56:15.901606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.901614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.901621 | orchestrator | 2026-01-02 00:56:15.901628 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-02 00:56:15.901635 | orchestrator | Friday 02 January 2026 00:53:43 +0000 (0:00:04.463) 0:04:28.055 ******** 2026-01-02 00:56:15.901645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.901657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.901670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.901678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-02 00:56:15.901685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-02 00:56:15.901692 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.901702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-02 00:56:15.901709 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.901717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-02 00:56:15.901731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-02 00:56:15.901739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-02 00:56:15.901746 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.901753 | orchestrator |
2026-01-02 00:56:15.901760 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-01-02 00:56:15.901767 | orchestrator | Friday 02 January 2026 00:53:45 +0000 (0:00:01.277) 0:04:29.333 ********
2026-01-02 00:56:15.901774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-01-02 00:56:15.901782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-01-02 00:56:15.901789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-01-02 00:56:15.901796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-01-02 00:56:15.901803 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.901810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-01-02 00:56:15.901820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-01-02 00:56:15.901828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-01-02 00:56:15.901838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-01-02 00:56:15.901845 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.901852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-01-02 00:56:15.901859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-01-02 00:56:15.901866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-01-02 00:56:15.901873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-01-02 00:56:15.901880 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.901887 | orchestrator |
2026-01-02 00:56:15.901897 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-01-02 00:56:15.901905 | orchestrator | Friday 02 January 2026 00:53:46 +0000 (0:00:00.992) 0:04:30.325 ********
2026-01-02 00:56:15.901911 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:56:15.901918 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:56:15.901925 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:56:15.901932 | orchestrator |
2026-01-02 00:56:15.901939 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-01-02 00:56:15.901946 | orchestrator | Friday 02 January 2026 00:53:47 +0000 (0:00:01.623) 0:04:31.949 ********
2026-01-02 00:56:15.901953 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:56:15.901960 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:56:15.901967 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:56:15.901974 | orchestrator |
2026-01-02 00:56:15.901980 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-01-02 00:56:15.901987 | orchestrator | Friday 02 January 2026 00:53:49 +0000 (0:00:02.244) 0:04:34.194 ********
2026-01-02 00:56:15.901994 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:56:15.902001 | orchestrator |
2026-01-02 00:56:15.902008 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-01-02 00:56:15.902037 | orchestrator | Friday 02 January 2026 00:53:51 +0000 (0:00:01.627) 0:04:35.821 ********
2026-01-02 00:56:15.902045 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-01-02 00:56:15.902053 | orchestrator |
2026-01-02 00:56:15.902061 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-01-02 00:56:15.902068 | orchestrator | Friday 02 January 2026 00:53:52 +0000 (0:00:00.838) 0:04:36.660 ********
2026-01-02 00:56:15.902075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-02 00:56:15.902083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-02 00:56:15.902097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-02 00:56:15.902105 | orchestrator |
2026-01-02 00:56:15.902111 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-01-02 00:56:15.902118 | orchestrator | Friday 02 January 2026 00:53:57 +0000 (0:00:04.729) 0:04:41.390 ********
2026-01-02 00:56:15.902125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-02 00:56:15.902132 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.902139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-02 00:56:15.902146 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.902158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-02 00:56:15.902165 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.902172 | orchestrator |
2026-01-02 00:56:15.902179 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-01-02 00:56:15.902186 | orchestrator | Friday 02 January 2026 00:53:58 +0000 (0:00:01.072) 0:04:42.462 ********
2026-01-02 00:56:15.902192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-02 00:56:15.902200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-02 00:56:15.902207 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.902214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-02 00:56:15.902224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-02 00:56:15.902231 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.902238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-02 00:56:15.902245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-02 00:56:15.902252 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.902259 | orchestrator |
2026-01-02 00:56:15.902266 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-01-02 00:56:15.902273 | orchestrator | Friday 02 January 2026 00:53:59 +0000 (0:00:01.659) 0:04:44.121 ********
2026-01-02 00:56:15.902280 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:56:15.902287 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:56:15.902294 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:56:15.902300 | orchestrator |
2026-01-02 00:56:15.902310 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-01-02 00:56:15.902317 | orchestrator | Friday 02 January 2026 00:54:02 +0000 (0:00:02.690) 0:04:46.812 ********
2026-01-02 00:56:15.902324 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:56:15.902331 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:56:15.902338 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:56:15.902345 | orchestrator |
2026-01-02 00:56:15.902352 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-01-02 00:56:15.902358 | orchestrator | Friday 02 January 2026 00:54:06 +0000 (0:00:03.513) 0:04:50.325 ********
2026-01-02 00:56:15.902366 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-01-02 00:56:15.902373 | orchestrator |
2026-01-02 00:56:15.902380 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-01-02 00:56:15.902387 | orchestrator | Friday 02 January 2026 00:54:07 +0000 (0:00:01.480) 0:04:51.806 ********
2026-01-02 00:56:15.902394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-02 00:56:15.902401 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.902419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-02 00:56:15.902427 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.902434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-02 00:56:15.902445 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.902452 | orchestrator |
2026-01-02 00:56:15.902459 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-01-02 00:56:15.902466 | orchestrator | Friday 02 January 2026 00:54:08 +0000 (0:00:01.305) 0:04:53.112 ********
2026-01-02 00:56:15.902473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-02 00:56:15.902480 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.902487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-02 00:56:15.902562 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.902596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-02 00:56:15.902604 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.902611 | orchestrator |
2026-01-02 00:56:15.902618 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-01-02 00:56:15.902626 | orchestrator | Friday 02 January 2026 00:54:10 +0000 (0:00:01.964) 0:04:54.508 ********
2026-01-02 00:56:15.902633 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.902640 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.902647 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.902653 | orchestrator |
2026-01-02 00:56:15.902660 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-01-02 00:56:15.902667 | orchestrator | Friday 02 January 2026 00:54:12 +0000 (0:00:02.472) 0:04:56.473 ********
2026-01-02 00:56:15.902675 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:15.902682 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:15.902689 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:15.902696 | orchestrator |
2026-01-02 00:56:15.902702 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-01-02 00:56:15.902709 | orchestrator | Friday 02 January 2026 00:54:14 +0000 (0:00:02.472) 0:04:58.945 ********
2026-01-02 00:56:15.902716 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:15.902723 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:15.902730 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:15.902736 | orchestrator |
2026-01-02 00:56:15.902744 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-01-02 00:56:15.902751 | orchestrator | Friday 02 January 2026 00:54:18 +0000 (0:00:04.173) 0:05:03.118 ********
2026-01-02 00:56:15.902764 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item=nova-serialproxy)
2026-01-02 00:56:15.902771 | orchestrator |
2026-01-02 00:56:15.902778 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-01-02 00:56:15.902785 | orchestrator | Friday 02 January 2026 00:54:19 +0000 (0:00:00.913) 0:05:04.032 ********
2026-01-02 00:56:15.902798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-02 00:56:15.902806 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.902814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-02 00:56:15.902821 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.902828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-02 00:56:15.902835 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.902842 | orchestrator |
2026-01-02 00:56:15.902849 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-01-02 00:56:15.902856 | orchestrator | Friday 02 January 2026 00:54:21 +0000 (0:00:01.414) 0:05:05.447 ********
2026-01-02 00:56:15.902863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-02 00:56:15.902870 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.902880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-02 00:56:15.902888 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.902895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-02 00:56:15.902916 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.902932 | orchestrator |
2026-01-02 00:56:15.902940 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-01-02 00:56:15.902947 | orchestrator | Friday 02 January 2026 00:54:22 +0000 (0:00:01.435) 0:05:06.883 ********
2026-01-02 00:56:15.902953 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.902961 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.902968 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.902975 | orchestrator |
2026-01-02 00:56:15.902982 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-01-02 00:56:15.902989 | orchestrator | Friday 02 January 2026 00:54:24 +0000 (0:00:01.608) 0:05:08.491 ********
2026-01-02 00:56:15.902996 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:15.903007 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:15.903015 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:15.903021 | orchestrator |
2026-01-02 00:56:15.903028 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-01-02 00:56:15.903035 | orchestrator | Friday 02 January 2026 00:54:26 +0000 (0:00:02.516) 0:05:11.008 ********
2026-01-02 00:56:15.903042 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:15.903049 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:15.903056 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:15.903062 | orchestrator |
2026-01-02 00:56:15.903070 | orchestrator | TASK [include_role : octavia] **************************************************
2026-01-02 00:56:15.903076 | orchestrator | Friday 02 January 2026 00:54:30 +0000 (0:00:03.581) 0:05:14.590 ********
2026-01-02 00:56:15.903083 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:56:15.903090 | orchestrator |
2026-01-02 00:56:15.903097 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-01-02 00:56:15.903104 | orchestrator | Friday 02 January 2026 00:54:31 +0000 (0:00:01.655) 0:05:16.245 ********
2026-01-02 00:56:15.903111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-02 00:56:15.903118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-02 00:56:15.903129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-02 00:56:15.903143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-02 00:56:15.903151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-02 00:56:15.903163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-02 00:56:15.903171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-02 00:56:15.903178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-02 00:56:15.903186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-02 00:56:15.903201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-02 00:56:15.903209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-02 00:56:15.903221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-02 00:56:15.903229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-02 00:56:15.903236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-02 00:56:15.903244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-02 00:56:15.903255 | orchestrator |
2026-01-02 00:56:15.903264 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-01-02 00:56:15.903271 | orchestrator | Friday 02 January 2026 00:54:35 +0000 (0:00:03.859) 0:05:20.105 ********
2026-01-02 00:56:15.903282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.903289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-02 00:56:15.903301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-02 00:56:15.903309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-02 00:56:15.903317 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.903324 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.903334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.903346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-02 00:56:15.903354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-02 00:56:15.903365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-02 00:56:15.903373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.903380 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.903388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.903399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-02 00:56:15.903409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-02 00:56:15.903416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-02 00:56:15.903427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:56:15.903435 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.903442 | orchestrator | 2026-01-02 00:56:15.903449 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-01-02 00:56:15.903456 | orchestrator | Friday 02 January 2026 00:54:36 +0000 (0:00:00.780) 0:05:20.886 ******** 2026-01-02 
00:56:15.903463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-02 00:56:15.903470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-02 00:56:15.903477 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.903484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-02 00:56:15.903491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-02 00:56:15.903498 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.903510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-02 00:56:15.903517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-02 00:56:15.903524 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.903531 | orchestrator | 2026-01-02 00:56:15.903538 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-02 00:56:15.903545 | orchestrator | Friday 02 January 2026 00:54:38 +0000 (0:00:01.584) 
0:05:22.470 ******** 2026-01-02 00:56:15.903551 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.903558 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.903565 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.903615 | orchestrator | 2026-01-02 00:56:15.903623 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-02 00:56:15.903630 | orchestrator | Friday 02 January 2026 00:54:39 +0000 (0:00:01.407) 0:05:23.877 ******** 2026-01-02 00:56:15.903637 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:15.903643 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:15.903650 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:15.903657 | orchestrator | 2026-01-02 00:56:15.903664 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-02 00:56:15.903671 | orchestrator | Friday 02 January 2026 00:54:41 +0000 (0:00:02.133) 0:05:26.010 ******** 2026-01-02 00:56:15.903678 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.903685 | orchestrator | 2026-01-02 00:56:15.903692 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-01-02 00:56:15.903705 | orchestrator | Friday 02 January 2026 00:54:43 +0000 (0:00:01.363) 0:05:27.374 ******** 2026-01-02 00:56:15.903713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-02 00:56:15.903726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-02 00:56:15.903734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2026-01-02 00:56:15.903747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-02 00:56:15.903760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-02 00:56:15.903772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-02 00:56:15.903781 | orchestrator | 2026-01-02 00:56:15.903787 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-01-02 00:56:15.903795 | orchestrator | Friday 02 January 2026 00:54:48 +0000 (0:00:05.682) 0:05:33.057 ******** 2026-01-02 00:56:15.903807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-02 00:56:15.903815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-02 00:56:15.903823 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.903833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-02 00:56:15.903841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-02 00:56:15.903852 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.903860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-02 00:56:15.903872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-02 00:56:15.903879 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.903887 | orchestrator | 2026-01-02 00:56:15.903894 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-02 00:56:15.903901 | orchestrator | Friday 02 
January 2026 00:54:49 +0000 (0:00:00.675) 0:05:33.732 ******** 2026-01-02 00:56:15.903908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-02 00:56:15.903916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-02 00:56:15.903926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-02 00:56:15.903934 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.903941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-02 00:56:15.903948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-02 00:56:15.903955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-02 00:56:15.903962 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.903969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}})  2026-01-02 00:56:15.903981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-02 00:56:15.903992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-02 00:56:15.904000 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.904008 | orchestrator | 2026-01-02 00:56:15.904015 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-02 00:56:15.904022 | orchestrator | Friday 02 January 2026 00:54:50 +0000 (0:00:00.935) 0:05:34.668 ******** 2026-01-02 00:56:15.904029 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.904035 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.904042 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.904049 | orchestrator | 2026-01-02 00:56:15.904056 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-02 00:56:15.904063 | orchestrator | Friday 02 January 2026 00:54:51 +0000 (0:00:00.834) 0:05:35.502 ******** 2026-01-02 00:56:15.904069 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.904076 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.904083 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.904090 | orchestrator | 2026-01-02 00:56:15.904097 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-02 00:56:15.904103 | orchestrator | Friday 02 January 2026 00:54:52 +0000 (0:00:01.423) 0:05:36.926 ******** 2026-01-02 00:56:15.904110 
| orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.904117 | orchestrator | 2026-01-02 00:56:15.904124 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-02 00:56:15.904131 | orchestrator | Friday 02 January 2026 00:54:54 +0000 (0:00:01.508) 0:05:38.434 ******** 2026-01-02 00:56:15.904138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-02 00:56:15.904145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 00:56:15.904156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 00:56:15.904190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-02 00:56:15.904198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 00:56:15.904205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 00:56:15.904235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-02 00:56:15.904247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 00:56:15.904258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 00:56:15.904281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-02 00:56:15.904292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-02 00:56:15.904305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 
'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-02 00:56:15.904332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-02 00:56:15.904343 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-02 00:56:15.904357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904373 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-02 00:56:15.904384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-02 00:56:15.904392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-02 00:56:15.904399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-02 00:56:15.904425 | 
orchestrator | 2026-01-02 00:56:15.904433 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-02 00:56:15.904440 | orchestrator | Friday 02 January 2026 00:54:58 +0000 (0:00:04.726) 0:05:43.161 ******** 2026-01-02 00:56:15.904505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-02 00:56:15.904552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 00:56:15.904566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 00:56:15.904618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-02 00:56:15.904627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-02 00:56:15.904640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-02 00:56:15.904662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 00:56:15.904678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-02 00:56:15.904685 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.904693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 
00:56:15.904720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-02 00:56:15.904728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-02 00:56:15.904749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-02 00:56:15.904764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 00:56:15.904783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-02 00:56:15.904790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904798 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.904805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-01-02 00:56:15.904817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 00:56:15.904829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-02 00:56:15.904840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-02 00:56:15.904848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:56:15.904863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-02 00:56:15.904875 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.904881 | orchestrator | 2026-01-02 00:56:15.904889 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-01-02 00:56:15.904896 | orchestrator | Friday 02 January 2026 00:55:00 +0000 (0:00:01.245) 0:05:44.406 ******** 2026-01-02 00:56:15.904903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-02 00:56:15.904910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-02 00:56:15.904918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-02 00:56:15.904930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-02 00:56:15.904937 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.904944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-02 00:56:15.904951 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-02 00:56:15.904958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-02 00:56:15.904966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-02 00:56:15.904973 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.904983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-02 00:56:15.904991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-02 00:56:15.904998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-02 00:56:15.905005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-02 00:56:15.905017 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.905024 | orchestrator | 2026-01-02 00:56:15.905032 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-01-02 00:56:15.905038 | orchestrator | Friday 02 January 2026 00:55:01 +0000 (0:00:01.098) 0:05:45.505 ******** 2026-01-02 00:56:15.905045 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.905052 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.905059 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.905066 | orchestrator | 2026-01-02 00:56:15.905073 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-01-02 00:56:15.905080 | orchestrator | Friday 02 January 2026 00:55:01 +0000 (0:00:00.466) 0:05:45.971 ******** 2026-01-02 00:56:15.905087 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.905094 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.905100 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.905107 | orchestrator | 2026-01-02 00:56:15.905114 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-01-02 00:56:15.905121 | orchestrator | Friday 02 January 2026 00:55:03 +0000 (0:00:01.534) 0:05:47.506 ******** 2026-01-02 00:56:15.905128 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.905135 | orchestrator | 2026-01-02 00:56:15.905142 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-01-02 00:56:15.905149 | orchestrator | Friday 02 January 2026 00:55:05 +0000 (0:00:01.878) 0:05:49.384 ******** 2026-01-02 00:56:15.905160 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:56:15.905169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:56:15.905180 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:56:15.905193 | orchestrator | 2026-01-02 00:56:15.905201 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-01-02 00:56:15.905208 | orchestrator | Friday 02 January 2026 00:55:07 +0000 (0:00:02.571) 0:05:51.956 ******** 2026-01-02 00:56:15.905215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-02 00:56:15.905222 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.905248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-02 00:56:15.905256 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.905264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-02 00:56:15.905275 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.905282 | orchestrator | 2026-01-02 00:56:15.905289 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-01-02 00:56:15.905299 | orchestrator | Friday 02 January 2026 00:55:08 +0000 (0:00:00.415) 0:05:52.371 ******** 2026-01-02 00:56:15.905307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-02 00:56:15.905314 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.905321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-02 00:56:15.905328 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.905335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-02 00:56:15.905342 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.905349 | orchestrator | 2026-01-02 00:56:15.905356 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-01-02 00:56:15.905363 | orchestrator | Friday 02 January 2026 00:55:09 +0000 (0:00:01.080) 0:05:53.451 ******** 2026-01-02 00:56:15.905370 | orchestrator | skipping: [testbed-node-0] 
2026-01-02 00:56:15.905377 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.905384 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.905391 | orchestrator | 2026-01-02 00:56:15.905398 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-01-02 00:56:15.905404 | orchestrator | Friday 02 January 2026 00:55:09 +0000 (0:00:00.452) 0:05:53.904 ******** 2026-01-02 00:56:15.905411 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.905418 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.905425 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.905432 | orchestrator | 2026-01-02 00:56:15.905439 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-01-02 00:56:15.905446 | orchestrator | Friday 02 January 2026 00:55:11 +0000 (0:00:01.449) 0:05:55.353 ******** 2026-01-02 00:56:15.905453 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:15.905461 | orchestrator | 2026-01-02 00:56:15.905468 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-01-02 00:56:15.905474 | orchestrator | Friday 02 January 2026 00:55:12 +0000 (0:00:01.820) 0:05:57.173 ******** 2026-01-02 00:56:15.905481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-02 00:56:15.905493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-02 00:56:15.905510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': 
'9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-02 00:56:15.905518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-02 00:56:15.905526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-02 00:56:15.905536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-02 00:56:15.905548 | orchestrator | 2026-01-02 00:56:15.905555 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-01-02 00:56:15.905562 | orchestrator | Friday 02 January 2026 00:55:19 +0000 (0:00:06.299) 0:06:03.473 ******** 2026-01-02 00:56:15.905588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.905596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.905603 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:15.905611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.905622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.905634 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:15.905641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.905652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-02 00:56:15.905660 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:15.905667 | orchestrator | 2026-01-02 00:56:15.905674 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-01-02 00:56:15.905681 | orchestrator | Friday 02 January 2026 00:55:19 +0000 (0:00:00.703) 0:06:04.177 ******** 2026-01-02 00:56:15.905688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-02 00:56:15.905695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-02 00:56:15.905702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-02 00:56:15.905709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-02 00:56:15.905717 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.905724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-02 00:56:15.905730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-02 00:56:15.905743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-02 00:56:15.905753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-02 00:56:15.905760 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.905767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-02 00:56:15.905775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-02 00:56:15.905782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-02 00:56:15.905789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-02 00:56:15.905796 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.905803 | orchestrator |
2026-01-02 00:56:15.905810 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-01-02 00:56:15.905816 | orchestrator | Friday 02 January 2026 00:55:21 +0000 (0:00:01.916) 0:06:06.093 ********
2026-01-02 00:56:15.905823 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:56:15.905830 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:56:15.905837 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:56:15.905844 | orchestrator |
2026-01-02 00:56:15.905851 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-01-02 00:56:15.905861 | orchestrator | Friday 02 January 2026 00:55:23 +0000 (0:00:01.491) 0:06:07.585 ********
2026-01-02 00:56:15.905869 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:56:15.905876 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:56:15.905883 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:56:15.905890 | orchestrator |
2026-01-02 00:56:15.905897 | orchestrator | TASK [include_role : swift] ****************************************************
2026-01-02 00:56:15.905904 | orchestrator | Friday 02 January 2026 00:55:25 +0000 (0:00:02.221) 0:06:09.806 ********
2026-01-02 00:56:15.905910 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.905917 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.905924 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.905931 | orchestrator |
2026-01-02 00:56:15.905938 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-01-02 00:56:15.905945 | orchestrator | Friday 02 January 2026 00:55:25 +0000 (0:00:00.358) 0:06:10.164 ********
2026-01-02 00:56:15.905952 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.905959 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.905966 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.905972 | orchestrator |
2026-01-02 00:56:15.905979 | orchestrator | TASK [include_role : trove] ****************************************************
2026-01-02 00:56:15.905986 | orchestrator | Friday 02 January 2026 00:55:26 +0000 (0:00:00.335) 0:06:10.500 ********
2026-01-02 00:56:15.905993 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.906000 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.906007 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.906013 | orchestrator |
2026-01-02 00:56:15.906047 | orchestrator | TASK [include_role : venus] ****************************************************
2026-01-02 00:56:15.906054 | orchestrator | Friday 02 January 2026 00:55:26 +0000 (0:00:00.642) 0:06:11.143 ********
2026-01-02 00:56:15.906067 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.906074 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.906081 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.906088 | orchestrator |
2026-01-02 00:56:15.906094 | orchestrator | TASK [include_role : watcher] **************************************************
2026-01-02 00:56:15.906101 | orchestrator | Friday 02 January 2026 00:55:27 +0000 (0:00:00.347) 0:06:11.490 ********
2026-01-02 00:56:15.906108 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.906115 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.906123 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.906130 | orchestrator |
2026-01-02 00:56:15.906137 | orchestrator | TASK [include_role : zun] ******************************************************
2026-01-02 00:56:15.906144 | orchestrator | Friday 02 January 2026 00:55:27 +0000 (0:00:00.310) 0:06:11.800 ********
2026-01-02 00:56:15.906151 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.906158 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.906164 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.906171 | orchestrator |
2026-01-02 00:56:15.906178 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-01-02 00:56:15.906184 | orchestrator | Friday 02 January 2026 00:55:28 +0000 (0:00:00.895) 0:06:12.696 ********
2026-01-02 00:56:15.906191 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:15.906198 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:15.906205 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:15.906212 | orchestrator |
2026-01-02 00:56:15.906219 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-01-02 00:56:15.906226 | orchestrator | Friday 02 January 2026 00:55:29 +0000 (0:00:00.370) 0:06:13.480 ********
2026-01-02 00:56:15.906233 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:15.906240 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:15.906246 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:15.906253 | orchestrator |
2026-01-02 00:56:15.906260 |
orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-01-02 00:56:15.906267 | orchestrator | Friday 02 January 2026 00:55:29 +0000 (0:00:00.370) 0:06:13.851 ********
2026-01-02 00:56:15.906273 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:15.906280 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:15.906290 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:15.906297 | orchestrator |
2026-01-02 00:56:15.906304 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-01-02 00:56:15.906311 | orchestrator | Friday 02 January 2026 00:55:30 +0000 (0:00:00.892) 0:06:14.743 ********
2026-01-02 00:56:15.906318 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:15.906325 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:15.906332 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:15.906338 | orchestrator |
2026-01-02 00:56:15.906345 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-01-02 00:56:15.906352 | orchestrator | Friday 02 January 2026 00:55:31 +0000 (0:00:01.255) 0:06:15.999 ********
2026-01-02 00:56:15.906359 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:15.906366 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:15.906373 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:15.906379 | orchestrator |
2026-01-02 00:56:15.906386 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-01-02 00:56:15.906393 | orchestrator | Friday 02 January 2026 00:55:32 +0000 (0:00:00.930) 0:06:16.929 ********
2026-01-02 00:56:15.906400 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:56:15.906407 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:56:15.906413 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:56:15.906420 | orchestrator |
2026-01-02 00:56:15.906427 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-01-02 00:56:15.906434 | orchestrator | Friday 02 January 2026 00:55:42 +0000 (0:00:09.700) 0:06:26.629 ********
2026-01-02 00:56:15.906441 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:15.906452 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:15.906459 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:15.906465 | orchestrator |
2026-01-02 00:56:15.906472 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-01-02 00:56:15.906479 | orchestrator | Friday 02 January 2026 00:55:43 +0000 (0:00:00.762) 0:06:27.392 ********
2026-01-02 00:56:15.906486 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:56:15.906492 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:56:15.906499 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:56:15.906507 | orchestrator |
2026-01-02 00:56:15.906519 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-01-02 00:56:15.906531 | orchestrator | Friday 02 January 2026 00:55:58 +0000 (0:00:14.889) 0:06:42.282 ********
2026-01-02 00:56:15.906538 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:15.906550 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:15.906557 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:15.906564 | orchestrator |
2026-01-02 00:56:15.906609 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-01-02 00:56:15.906618 | orchestrator | Friday 02 January 2026 00:55:59 +0000 (0:00:01.128) 0:06:43.410 ********
2026-01-02 00:56:15.906625 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:56:15.906631 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:56:15.906638 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:56:15.906645 | orchestrator |
2026-01-02 00:56:15.906652 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-01-02 00:56:15.906658 | orchestrator | Friday 02 January 2026 00:56:03 +0000 (0:00:04.626) 0:06:48.036 ********
2026-01-02 00:56:15.906665 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.906672 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.906679 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.906685 | orchestrator |
2026-01-02 00:56:15.906692 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-01-02 00:56:15.906699 | orchestrator | Friday 02 January 2026 00:56:04 +0000 (0:00:00.362) 0:06:48.399 ********
2026-01-02 00:56:15.906705 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.906712 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.906719 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.906726 | orchestrator |
2026-01-02 00:56:15.906732 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-01-02 00:56:15.906739 | orchestrator | Friday 02 January 2026 00:56:04 +0000 (0:00:00.359) 0:06:48.758 ********
2026-01-02 00:56:15.906746 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.906752 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.906759 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.906766 | orchestrator |
2026-01-02 00:56:15.906772 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-01-02 00:56:15.906779 | orchestrator | Friday 02 January 2026 00:56:05 +0000 (0:00:00.850) 0:06:49.609 ********
2026-01-02 00:56:15.906786 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.906792 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.906799 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.906805 | orchestrator |
2026-01-02 00:56:15.906813 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-01-02 00:56:15.906820 | orchestrator | Friday 02 January 2026 00:56:05 +0000 (0:00:00.390) 0:06:49.999 ********
2026-01-02 00:56:15.906826 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.906833 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.906840 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.906847 | orchestrator |
2026-01-02 00:56:15.906854 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-01-02 00:56:15.906861 | orchestrator | Friday 02 January 2026 00:56:06 +0000 (0:00:00.374) 0:06:50.374 ********
2026-01-02 00:56:15.906867 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:15.906874 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:15.906886 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:15.906893 | orchestrator |
2026-01-02 00:56:15.906899 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-01-02 00:56:15.906906 | orchestrator | Friday 02 January 2026 00:56:06 +0000 (0:00:00.399) 0:06:50.774 ********
2026-01-02 00:56:15.906913 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:15.906920 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:15.906926 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:15.906933 | orchestrator |
2026-01-02 00:56:15.906940 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-01-02 00:56:15.906947 | orchestrator | Friday 02 January 2026 00:56:11 +0000 (0:00:05.203) 0:06:55.977 ********
2026-01-02 00:56:15.906954 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:15.906961 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:15.906967 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:15.906974 | orchestrator |
2026-01-02 00:56:15.906985 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:56:15.906992 |
orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-01-02 00:56:15.906999 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-01-02 00:56:15.907006 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-01-02 00:56:15.907013 | orchestrator |
2026-01-02 00:56:15.907020 | orchestrator |
2026-01-02 00:56:15.907027 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:56:15.907034 | orchestrator | Friday 02 January 2026 00:56:12 +0000 (0:00:00.944) 0:06:56.922 ********
2026-01-02 00:56:15.907040 | orchestrator | ===============================================================================
2026-01-02 00:56:15.907047 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 14.89s
2026-01-02 00:56:15.907054 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.70s
2026-01-02 00:56:15.907061 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 8.59s
2026-01-02 00:56:15.907067 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 6.61s
2026-01-02 00:56:15.907074 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.30s
2026-01-02 00:56:15.907081 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.70s
2026-01-02 00:56:15.907088 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.68s
2026-01-02 00:56:15.907095 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.43s
2026-01-02 00:56:15.907102 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.20s
2026-01-02 00:56:15.907113 | orchestrator |
haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.73s
2026-01-02 00:56:15.907120 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.73s
2026-01-02 00:56:15.907127 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.63s
2026-01-02 00:56:15.907134 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.46s
2026-01-02 00:56:15.907140 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.40s
2026-01-02 00:56:15.907147 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.36s
2026-01-02 00:56:15.907154 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.31s
2026-01-02 00:56:15.907160 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.30s
2026-01-02 00:56:15.907167 | orchestrator | proxysql-config : Copying over nova-cell ProxySQL rules config ---------- 4.17s
2026-01-02 00:56:15.907174 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.15s
2026-01-02 00:56:15.907190 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.06s
2026-01-02 00:56:15.907197 | orchestrator | 2026-01-02 00:56:15 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:56:15.907204 | orchestrator | 2026-01-02 00:56:15 | INFO  | Task ea5cb038-122d-49fa-86ac-aa194d5b539f is in state STARTED
2026-01-02 00:56:15.907210 | orchestrator | 2026-01-02 00:56:15 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED
2026-01-02 00:56:15.907217 | orchestrator | 2026-01-02 00:56:15 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:57:59.716552 | orchestrator | 2026-01-02 00:57:59 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:57:59.718106 | orchestrator | 2026-01-02 00:57:59 | INFO  | Task ea5cb038-122d-49fa-86ac-aa194d5b539f is in state STARTED
2026-01-02 00:57:59.720238 | orchestrator | 2026-01-02 00:57:59 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED
2026-01-02 00:57:59.720293 | orchestrator | 2026-01-02 00:57:59 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:58:02.769855 | orchestrator | 2026-01-02 00:58:02 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED
2026-01-02 00:58:02.771095 | orchestrator | 2026-01-02 00:58:02 | INFO  | Task ea5cb038-122d-49fa-86ac-aa194d5b539f is in state STARTED
2026-01-02 00:58:02.773292 | orchestrator | 2026-01-02 00:58:02 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED 2026-01-02
00:58:02.773394 | orchestrator | 2026-01-02 00:58:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:58:05.822263 | orchestrator | 2026-01-02 00:58:05 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:58:05.824371 | orchestrator | 2026-01-02 00:58:05 | INFO  | Task ea5cb038-122d-49fa-86ac-aa194d5b539f is in state STARTED 2026-01-02 00:58:05.827418 | orchestrator | 2026-01-02 00:58:05 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED 2026-01-02 00:58:05.827496 | orchestrator | 2026-01-02 00:58:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:58:08.875408 | orchestrator | 2026-01-02 00:58:08 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:58:08.877048 | orchestrator | 2026-01-02 00:58:08 | INFO  | Task ea5cb038-122d-49fa-86ac-aa194d5b539f is in state STARTED 2026-01-02 00:58:08.881692 | orchestrator | 2026-01-02 00:58:08 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED 2026-01-02 00:58:08.881751 | orchestrator | 2026-01-02 00:58:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:58:11.930871 | orchestrator | 2026-01-02 00:58:11 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:58:11.933142 | orchestrator | 2026-01-02 00:58:11 | INFO  | Task ea5cb038-122d-49fa-86ac-aa194d5b539f is in state STARTED 2026-01-02 00:58:11.935252 | orchestrator | 2026-01-02 00:58:11 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED 2026-01-02 00:58:11.935288 | orchestrator | 2026-01-02 00:58:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:58:14.979822 | orchestrator | 2026-01-02 00:58:14 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:58:14.981766 | orchestrator | 2026-01-02 00:58:14 | INFO  | Task ea5cb038-122d-49fa-86ac-aa194d5b539f is in state STARTED 2026-01-02 00:58:14.984018 | orchestrator | 2026-01-02 00:58:14 | 
INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED 2026-01-02 00:58:14.984761 | orchestrator | 2026-01-02 00:58:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:58:18.032087 | orchestrator | 2026-01-02 00:58:18 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:58:18.034634 | orchestrator | 2026-01-02 00:58:18 | INFO  | Task ea5cb038-122d-49fa-86ac-aa194d5b539f is in state STARTED 2026-01-02 00:58:18.036914 | orchestrator | 2026-01-02 00:58:18 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED 2026-01-02 00:58:18.037903 | orchestrator | 2026-01-02 00:58:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:58:21.085911 | orchestrator | 2026-01-02 00:58:21 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:58:21.088835 | orchestrator | 2026-01-02 00:58:21 | INFO  | Task ea5cb038-122d-49fa-86ac-aa194d5b539f is in state STARTED 2026-01-02 00:58:21.091062 | orchestrator | 2026-01-02 00:58:21 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED 2026-01-02 00:58:21.091579 | orchestrator | 2026-01-02 00:58:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:58:24.137853 | orchestrator | 2026-01-02 00:58:24 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:58:24.139800 | orchestrator | 2026-01-02 00:58:24 | INFO  | Task ea5cb038-122d-49fa-86ac-aa194d5b539f is in state STARTED 2026-01-02 00:58:24.143136 | orchestrator | 2026-01-02 00:58:24 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED 2026-01-02 00:58:24.143213 | orchestrator | 2026-01-02 00:58:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:58:27.195615 | orchestrator | 2026-01-02 00:58:27 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:58:27.197982 | orchestrator | 2026-01-02 00:58:27 | INFO  | Task ea5cb038-122d-49fa-86ac-aa194d5b539f is in 
state STARTED 2026-01-02 00:58:27.200243 | orchestrator | 2026-01-02 00:58:27 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED 2026-01-02 00:58:27.200389 | orchestrator | 2026-01-02 00:58:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:58:30.245557 | orchestrator | 2026-01-02 00:58:30 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:58:30.246975 | orchestrator | 2026-01-02 00:58:30 | INFO  | Task ea5cb038-122d-49fa-86ac-aa194d5b539f is in state STARTED 2026-01-02 00:58:30.248735 | orchestrator | 2026-01-02 00:58:30 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED 2026-01-02 00:58:30.248762 | orchestrator | 2026-01-02 00:58:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:58:33.289107 | orchestrator | 2026-01-02 00:58:33 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:58:33.290065 | orchestrator | 2026-01-02 00:58:33 | INFO  | Task ea5cb038-122d-49fa-86ac-aa194d5b539f is in state STARTED 2026-01-02 00:58:33.290088 | orchestrator | 2026-01-02 00:58:33 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED 2026-01-02 00:58:33.290096 | orchestrator | 2026-01-02 00:58:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:58:36.338368 | orchestrator | 2026-01-02 00:58:36 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state STARTED 2026-01-02 00:58:36.339782 | orchestrator | 2026-01-02 00:58:36 | INFO  | Task ea5cb038-122d-49fa-86ac-aa194d5b539f is in state STARTED 2026-01-02 00:58:36.343763 | orchestrator | 2026-01-02 00:58:36 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED 2026-01-02 00:58:36.343841 | orchestrator | 2026-01-02 00:58:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:58:39.402953 | orchestrator | 2026-01-02 00:58:39 | INFO  | Task ed1e3134-9ea4-4ca0-85f0-d48c2e24720d is in state SUCCESS 2026-01-02 00:58:39.406616 | orchestrator 
| 2026-01-02 00:58:39.406706 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-02 00:58:39.406725 | orchestrator | 2.16.14 2026-01-02 00:58:39.406743 | orchestrator | 2026-01-02 00:58:39.406757 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-01-02 00:58:39.406773 | orchestrator | 2026-01-02 00:58:39.406788 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-02 00:58:39.406803 | orchestrator | Friday 02 January 2026 00:46:49 +0000 (0:00:00.797) 0:00:00.797 ******** 2026-01-02 00:58:39.406819 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:39.406836 | orchestrator | 2026-01-02 00:58:39.406850 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-02 00:58:39.406877 | orchestrator | Friday 02 January 2026 00:46:50 +0000 (0:00:01.077) 0:00:01.875 ******** 2026-01-02 00:58:39.406895 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.406913 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.406929 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.406945 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.406961 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.406977 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.406992 | orchestrator | 2026-01-02 00:58:39.407008 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-02 00:58:39.407049 | orchestrator | Friday 02 January 2026 00:46:52 +0000 (0:00:01.576) 0:00:03.451 ******** 2026-01-02 00:58:39.407234 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.407252 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.407268 | orchestrator | ok: [testbed-node-5] 2026-01-02 
00:58:39.407285 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.407301 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.407316 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.407384 | orchestrator | 2026-01-02 00:58:39.407404 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-02 00:58:39.407439 | orchestrator | Friday 02 January 2026 00:46:53 +0000 (0:00:00.956) 0:00:04.407 ******** 2026-01-02 00:58:39.407454 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.407469 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.407484 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.407500 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.407514 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.407529 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.407538 | orchestrator | 2026-01-02 00:58:39.407547 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-02 00:58:39.407558 | orchestrator | Friday 02 January 2026 00:46:54 +0000 (0:00:01.005) 0:00:05.413 ******** 2026-01-02 00:58:39.407573 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.407587 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.407602 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.407616 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.407630 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.407645 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.407658 | orchestrator | 2026-01-02 00:58:39.407673 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-02 00:58:39.407688 | orchestrator | Friday 02 January 2026 00:46:55 +0000 (0:00:00.737) 0:00:06.150 ******** 2026-01-02 00:58:39.407702 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.407718 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.407733 | orchestrator | ok: 
[testbed-node-5] 2026-01-02 00:58:39.407748 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.407763 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.407778 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.407792 | orchestrator | 2026-01-02 00:58:39.407808 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-02 00:58:39.407824 | orchestrator | Friday 02 January 2026 00:46:55 +0000 (0:00:00.630) 0:00:06.781 ******** 2026-01-02 00:58:39.407838 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.407853 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.407868 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.407882 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.407894 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.407907 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.407920 | orchestrator | 2026-01-02 00:58:39.407933 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-02 00:58:39.407947 | orchestrator | Friday 02 January 2026 00:46:56 +0000 (0:00:00.880) 0:00:07.662 ******** 2026-01-02 00:58:39.407960 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.407975 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.407988 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.408001 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.408015 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.408028 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.408041 | orchestrator | 2026-01-02 00:58:39.408050 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-02 00:58:39.408058 | orchestrator | Friday 02 January 2026 00:46:57 +0000 (0:00:00.791) 0:00:08.453 ******** 2026-01-02 00:58:39.408066 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.408074 | orchestrator | 
ok: [testbed-node-4] 2026-01-02 00:58:39.408082 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.408090 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.408109 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.408117 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.408125 | orchestrator | 2026-01-02 00:58:39.408133 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-02 00:58:39.408142 | orchestrator | Friday 02 January 2026 00:46:58 +0000 (0:00:00.947) 0:00:09.401 ******** 2026-01-02 00:58:39.408150 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-02 00:58:39.408158 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-02 00:58:39.408166 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-02 00:58:39.408174 | orchestrator | 2026-01-02 00:58:39.408182 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-02 00:58:39.408190 | orchestrator | Friday 02 January 2026 00:46:58 +0000 (0:00:00.552) 0:00:09.953 ******** 2026-01-02 00:58:39.408198 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.408206 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.408214 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.408277 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.408286 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.408294 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.408302 | orchestrator | 2026-01-02 00:58:39.408310 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-02 00:58:39.408319 | orchestrator | Friday 02 January 2026 00:47:00 +0000 (0:00:01.194) 0:00:11.147 ******** 2026-01-02 00:58:39.408327 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item=testbed-node-0) 2026-01-02 00:58:39.408335 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-02 00:58:39.408343 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-02 00:58:39.408351 | orchestrator | 2026-01-02 00:58:39.408359 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-02 00:58:39.408373 | orchestrator | Friday 02 January 2026 00:47:03 +0000 (0:00:03.192) 0:00:14.340 ******** 2026-01-02 00:58:39.408582 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-02 00:58:39.408602 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-02 00:58:39.408617 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-02 00:58:39.408630 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.408645 | orchestrator | 2026-01-02 00:58:39.408659 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-02 00:58:39.408673 | orchestrator | Friday 02 January 2026 00:47:04 +0000 (0:00:00.679) 0:00:15.020 ******** 2026-01-02 00:58:39.408689 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.408708 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.408757 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.408772 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.408786 | orchestrator | 2026-01-02 00:58:39.408801 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-02 00:58:39.408814 | orchestrator | Friday 02 January 2026 00:47:04 +0000 (0:00:00.752) 0:00:15.772 ******** 2026-01-02 00:58:39.408828 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.408874 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.408885 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.408893 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.408901 | orchestrator | 2026-01-02 00:58:39.408910 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-01-02 00:58:39.408918 | orchestrator | Friday 02 January 2026 00:47:05 +0000 (0:00:00.443) 0:00:16.215 ******** 2026-01-02 00:58:39.408938 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-02 00:47:00.929465', 'end': '2026-01-02 00:47:01.227728', 'delta': '0:00:00.298263', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.408957 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-02 00:47:01.958383', 'end': '2026-01-02 00:47:02.270238', 'delta': '0:00:00.311855', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.408966 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-02 00:47:02.851322', 'end': '2026-01-02 00:47:03.191418', 'delta': 
'0:00:00.340096', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.408974 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.408982 | orchestrator | 2026-01-02 00:58:39.408990 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-02 00:58:39.408999 | orchestrator | Friday 02 January 2026 00:47:05 +0000 (0:00:00.157) 0:00:16.373 ******** 2026-01-02 00:58:39.409051 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.409061 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.409069 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.409077 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.409085 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.409093 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.409101 | orchestrator | 2026-01-02 00:58:39.409109 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-02 00:58:39.409117 | orchestrator | Friday 02 January 2026 00:47:06 +0000 (0:00:01.327) 0:00:17.700 ******** 2026-01-02 00:58:39.409125 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-02 00:58:39.409133 | orchestrator | 2026-01-02 00:58:39.409141 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-02 00:58:39.409149 | orchestrator | Friday 02 January 2026 00:47:07 +0000 (0:00:00.846) 0:00:18.547 ******** 2026-01-02 00:58:39.409157 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.409165 | 
orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.409172 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.409180 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.409188 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.409196 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.409204 | orchestrator | 2026-01-02 00:58:39.409212 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-02 00:58:39.409300 | orchestrator | Friday 02 January 2026 00:47:09 +0000 (0:00:01.851) 0:00:20.398 ******** 2026-01-02 00:58:39.409310 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.409318 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.409326 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.409334 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.409342 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.409350 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.409358 | orchestrator | 2026-01-02 00:58:39.409365 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-02 00:58:39.409374 | orchestrator | Friday 02 January 2026 00:47:11 +0000 (0:00:02.085) 0:00:22.484 ******** 2026-01-02 00:58:39.409381 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.409389 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.409397 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.409405 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.409442 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.409453 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.409461 | orchestrator | 2026-01-02 00:58:39.409469 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-02 00:58:39.409477 | orchestrator | Friday 02 January 2026 00:47:13 
+0000 (0:00:02.348) 0:00:24.833 ******** 2026-01-02 00:58:39.409485 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.409493 | orchestrator | 2026-01-02 00:58:39.409501 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-02 00:58:39.409509 | orchestrator | Friday 02 January 2026 00:47:14 +0000 (0:00:00.251) 0:00:25.084 ******** 2026-01-02 00:58:39.409517 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.409525 | orchestrator | 2026-01-02 00:58:39.409533 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-02 00:58:39.409545 | orchestrator | Friday 02 January 2026 00:47:14 +0000 (0:00:00.306) 0:00:25.390 ******** 2026-01-02 00:58:39.409558 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.409571 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.409585 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.409641 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.409658 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.409672 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.409685 | orchestrator | 2026-01-02 00:58:39.409694 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-02 00:58:39.409710 | orchestrator | Friday 02 January 2026 00:47:15 +0000 (0:00:01.051) 0:00:26.442 ******** 2026-01-02 00:58:39.409718 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.409726 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.409734 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.409742 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.409811 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.409821 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.409829 | orchestrator | 2026-01-02 00:58:39.409837 | orchestrator | TASK [ceph-facts : 
Set_fact build devices from resolved symlinks] **************
2026-01-02 00:58:39.409845 | orchestrator | Friday 02 January 2026 00:47:16 +0000 (0:00:01.284) 0:00:27.726 ********
2026-01-02 00:58:39.409859 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.409868 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.409876 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.409884 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.409892 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.409928 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.409938 | orchestrator |
2026-01-02 00:58:39.409946 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-02 00:58:39.409981 | orchestrator | Friday 02 January 2026 00:47:17 +0000 (0:00:00.788) 0:00:28.515 ********
2026-01-02 00:58:39.409992 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.410000 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.410008 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.410057 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.410068 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.410076 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.410084 | orchestrator |
2026-01-02 00:58:39.410093 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-02 00:58:39.410101 | orchestrator | Friday 02 January 2026 00:47:18 +0000 (0:00:00.762) 0:00:29.278 ********
2026-01-02 00:58:39.410109 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.410117 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.410125 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.410133 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.410164 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.410173 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.410181 | orchestrator |
2026-01-02 00:58:39.410189 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-02 00:58:39.410197 | orchestrator | Friday 02 January 2026 00:47:18 +0000 (0:00:00.578) 0:00:29.857 ********
2026-01-02 00:58:39.410205 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.410213 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.410221 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.410229 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.410237 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.410245 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.410253 | orchestrator |
2026-01-02 00:58:39.410261 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-02 00:58:39.410269 | orchestrator | Friday 02 January 2026 00:47:19 +0000 (0:00:00.839) 0:00:30.696 ********
2026-01-02 00:58:39.410278 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.410286 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.410294 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.410302 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.410310 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.410318 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.410326 | orchestrator |
2026-01-02 00:58:39.410334 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-02 00:58:39.410342 | orchestrator | Friday 02 January 2026 00:47:20 +0000 (0:00:00.678) 0:00:31.375 ********
2026-01-02 00:58:39.410355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c483f3a2--63e3--5a58--8db6--ff291b90fd92-osd--block--c483f3a2--63e3--5a58--8db6--ff291b90fd92', 'dm-uuid-LVM-kadOhytslGICfsMPpKKIVUaEJZeEZBk73B7QIjOP9WodUfze1OHCoMt864UsTUvw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b4d4f98--8928--5a24--8a9c--c2096dcbe0fa-osd--block--7b4d4f98--8928--5a24--8a9c--c2096dcbe0fa', 'dm-uuid-LVM-sxJm4x8SlvbGRmvWJwUw9wXTKNruugDANlwCAkwEOOoflkJUMHpUFsVEuSUEhryA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--98c0a427--0bfe--5560--90fa--409a46d34f73-osd--block--98c0a427--0bfe--5560--90fa--409a46d34f73', 'dm-uuid-LVM-ujYeRjdD1qfODf03CZCJdSrEePIiQB0u1Giu1X49vSEhSEheZdpGGJEEew5YAOc0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b563cbc7--469d--5dd4--bc68--32b49ff22a36-osd--block--b563cbc7--469d--5dd4--bc68--32b49ff22a36', 'dm-uuid-LVM-aPTuh7VgWuNL0o8yp0aA0k4J5EcwWp1UwtvMBnJ1KazEPyOojPH041G8du5gyEEG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410584 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part1', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part14', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part15', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part16', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:58:39.410601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c483f3a2--63e3--5a58--8db6--ff291b90fd92-osd--block--c483f3a2--63e3--5a58--8db6--ff291b90fd92'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wNoFtq-1fxT-BlVw-9ASv-vo95-eRJy-yzlXtr', 'scsi-0QEMU_QEMU_HARDDISK_6d9d2903-81fe-42d1-9111-d7d9a87231b0', 'scsi-SQEMU_QEMU_HARDDISK_6d9d2903-81fe-42d1-9111-d7d9a87231b0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:58:39.410611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7b4d4f98--8928--5a24--8a9c--c2096dcbe0fa-osd--block--7b4d4f98--8928--5a24--8a9c--c2096dcbe0fa'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dgyX59-ld2G-gwPN-ZkQx-fE5Q-h7ke-QieFAN', 'scsi-0QEMU_QEMU_HARDDISK_91cfe094-4682-4bfc-95e3-88354566cb8a', 'scsi-SQEMU_QEMU_HARDDISK_91cfe094-4682-4bfc-95e3-88354566cb8a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:58:39.410625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ace49a83-40fe-462c-82a5-a32ee72a9346', 'scsi-SQEMU_QEMU_HARDDISK_ace49a83-40fe-462c-82a5-a32ee72a9346'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:58:39.410649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:58:39.410666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410680 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410710 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c17e839--2cbb--5f17--abcc--9f26ae111b42-osd--block--8c17e839--2cbb--5f17--abcc--9f26ae111b42', 'dm-uuid-LVM-CniWHMALJAJrblTkLmpMQNyFIUQNReVPb8Z2UREu9VHvJMqzpWRcds7QSRTO0ZNz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410719 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--37cfd703--64b2--55b0--ad28--4f6812d5fa0d-osd--block--37cfd703--64b2--55b0--ad28--4f6812d5fa0d', 'dm-uuid-LVM-xRKpP4K50Lzg4Aow2riAhqUnqA6bb9ERpDH3KbjKE8JTEzAW2NyffvPUW8kVZatV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410789 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6', 'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part1', 'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part14', 'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part15', 'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part16', 'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:58:39.410800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--98c0a427--0bfe--5560--90fa--409a46d34f73-osd--block--98c0a427--0bfe--5560--90fa--409a46d34f73'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zcsi9f-4FfR-03B6-eElj-zHps-d8Au-IF9oXe', 'scsi-0QEMU_QEMU_HARDDISK_84499345-a879-443a-82ee-40e5571fa8cd', 'scsi-SQEMU_QEMU_HARDDISK_84499345-a879-443a-82ee-40e5571fa8cd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:58:39.410830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b563cbc7--469d--5dd4--bc68--32b49ff22a36-osd--block--b563cbc7--469d--5dd4--bc68--32b49ff22a36'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wqmApf-v7ib-7Bs3-YcaK-OSLi-GEEa-ycSF6r', 'scsi-0QEMU_QEMU_HARDDISK_7a849538-9b89-4e07-840a-8a2ecc10a58d', 'scsi-SQEMU_QEMU_HARDDISK_7a849538-9b89-4e07-840a-8a2ecc10a58d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:58:39.410847 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410856 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.410869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_496b1234-da7e-4975-8125-a1f8cbe1a452', 'scsi-SQEMU_QEMU_HARDDISK_496b1234-da7e-4975-8125-a1f8cbe1a452'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:58:39.410881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410890 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:58:39.410906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:58:39.410922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8c17e839--2cbb--5f17--abcc--9f26ae111b42-osd--block--8c17e839--2cbb--5f17--abcc--9f26ae111b42'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-I3Hbgj-sUvA-JwwA-9Uln-iaSG-kJrt-Ae9QOg', 'scsi-0QEMU_QEMU_HARDDISK_3f193762-36b0-4c27-b28e-8efb206edc66', 'scsi-SQEMU_QEMU_HARDDISK_3f193762-36b0-4c27-b28e-8efb206edc66'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:58:39.410935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--37cfd703--64b2--55b0--ad28--4f6812d5fa0d-osd--block--37cfd703--64b2--55b0--ad28--4f6812d5fa0d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KNoUEf-TJ8y-mEou-hIgr-GLCl-tNSf-zuT3gs', 'scsi-0QEMU_QEMU_HARDDISK_26cdd52f-83be-4086-bce2-9cb6df4f24ab', 'scsi-SQEMU_QEMU_HARDDISK_26cdd52f-83be-4086-bce2-9cb6df4f24ab'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:58:39.410944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a47a132-03ad-4adf-a37b-d405efe1a07c', 'scsi-SQEMU_QEMU_HARDDISK_3a47a132-03ad-4adf-a37b-d405efe1a07c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:58:39.410957 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:58:39.410965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.410989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.411003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.411026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.411039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.411047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.411061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:58:39.411070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28', 'scsi-SQEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28-part1', 'scsi-SQEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28-part14', 'scsi-SQEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28-part15', 'scsi-SQEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28-part16', 'scsi-SQEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:58:39.411085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:58:39.411094 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.411106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512',
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:58:39.411114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:58:39.411127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:58:39.411136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:58:39.411144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:58:39.411153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:58:39.411161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:58:39.411169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:58:39.411185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92', 'scsi-SQEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92-part1', 'scsi-SQEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92-part14', 'scsi-SQEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92-part15', 'scsi-SQEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92-part16', 'scsi-SQEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:58:39.411199 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:58:39.411208 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.411216 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.411224 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.411233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:58:39.411242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:58:39.411250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:58:39.411259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:58:39.411272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:58:39.411342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:58:39.411364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-01-02 00:58:39.411373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:58:39.411382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4', 'scsi-SQEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4-part1', 'scsi-SQEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4-part14', 'scsi-SQEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4-part15', 'scsi-SQEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4-part16', 'scsi-SQEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:58:39.411399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:58:39.411408 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.411439 | orchestrator | 2026-01-02 00:58:39.411448 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-02 00:58:39.411462 | orchestrator | Friday 02 January 2026 00:47:22 +0000 (0:00:02.113) 0:00:33.489 ******** 2026-01-02 00:58:39.411474 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c483f3a2--63e3--5a58--8db6--ff291b90fd92-osd--block--c483f3a2--63e3--5a58--8db6--ff291b90fd92', 
'dm-uuid-LVM-kadOhytslGICfsMPpKKIVUaEJZeEZBk73B7QIjOP9WodUfze1OHCoMt864UsTUvw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.411484 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--98c0a427--0bfe--5560--90fa--409a46d34f73-osd--block--98c0a427--0bfe--5560--90fa--409a46d34f73', 'dm-uuid-LVM-ujYeRjdD1qfODf03CZCJdSrEePIiQB0u1Giu1X49vSEhSEheZdpGGJEEew5YAOc0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.411493 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b4d4f98--8928--5a24--8a9c--c2096dcbe0fa-osd--block--7b4d4f98--8928--5a24--8a9c--c2096dcbe0fa', 'dm-uuid-LVM-sxJm4x8SlvbGRmvWJwUw9wXTKNruugDANlwCAkwEOOoflkJUMHpUFsVEuSUEhryA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 
'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.411501 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b563cbc7--469d--5dd4--bc68--32b49ff22a36-osd--block--b563cbc7--469d--5dd4--bc68--32b49ff22a36', 'dm-uuid-LVM-aPTuh7VgWuNL0o8yp0aA0k4J5EcwWp1UwtvMBnJ1KazEPyOojPH041G8du5gyEEG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.411858 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.411991 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412011 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412024 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412036 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c17e839--2cbb--5f17--abcc--9f26ae111b42-osd--block--8c17e839--2cbb--5f17--abcc--9f26ae111b42', 'dm-uuid-LVM-CniWHMALJAJrblTkLmpMQNyFIUQNReVPb8Z2UREu9VHvJMqzpWRcds7QSRTO0ZNz'], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412049 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412078 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--37cfd703--64b2--55b0--ad28--4f6812d5fa0d-osd--block--37cfd703--64b2--55b0--ad28--4f6812d5fa0d', 'dm-uuid-LVM-xRKpP4K50Lzg4Aow2riAhqUnqA6bb9ERpDH3KbjKE8JTEzAW2NyffvPUW8kVZatV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412104 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412117 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412128 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412140 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412151 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412163 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412182 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412207 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412220 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412231 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412242 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412255 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412285 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6', 'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part1', 'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part14', 'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part15', 'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part16', 'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-02 00:58:39.412320 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412333 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412348 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412361 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412392 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--98c0a427--0bfe--5560--90fa--409a46d34f73-osd--block--98c0a427--0bfe--5560--90fa--409a46d34f73'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zcsi9f-4FfR-03B6-eElj-zHps-d8Au-IF9oXe', 'scsi-0QEMU_QEMU_HARDDISK_84499345-a879-443a-82ee-40e5571fa8cd', 'scsi-SQEMU_QEMU_HARDDISK_84499345-a879-443a-82ee-40e5571fa8cd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412408 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412458 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412472 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412501 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-02 00:58:39.412522 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412534 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b563cbc7--469d--5dd4--bc68--32b49ff22a36-osd--block--b563cbc7--469d--5dd4--bc68--32b49ff22a36'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wqmApf-v7ib-7Bs3-YcaK-OSLi-GEEa-ycSF6r', 'scsi-0QEMU_QEMU_HARDDISK_7a849538-9b89-4e07-840a-8a2ecc10a58d', 'scsi-SQEMU_QEMU_HARDDISK_7a849538-9b89-4e07-840a-8a2ecc10a58d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412546 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_496b1234-da7e-4975-8125-a1f8cbe1a452', 'scsi-SQEMU_QEMU_HARDDISK_496b1234-da7e-4975-8125-a1f8cbe1a452'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412557 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412584 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412602 | orchestrator | 
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412614 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8c17e839--2cbb--5f17--abcc--9f26ae111b42-osd--block--8c17e839--2cbb--5f17--abcc--9f26ae111b42'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-I3Hbgj-sUvA-JwwA-9Uln-iaSG-kJrt-Ae9QOg', 'scsi-0QEMU_QEMU_HARDDISK_3f193762-36b0-4c27-b28e-8efb206edc66', 'scsi-SQEMU_QEMU_HARDDISK_3f193762-36b0-4c27-b28e-8efb206edc66'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412626 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412659 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part1', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part14', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part15', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part16', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-02 00:58:39.412690 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412709 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c483f3a2--63e3--5a58--8db6--ff291b90fd92-osd--block--c483f3a2--63e3--5a58--8db6--ff291b90fd92'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wNoFtq-1fxT-BlVw-9ASv-vo95-eRJy-yzlXtr', 'scsi-0QEMU_QEMU_HARDDISK_6d9d2903-81fe-42d1-9111-d7d9a87231b0', 'scsi-SQEMU_QEMU_HARDDISK_6d9d2903-81fe-42d1-9111-d7d9a87231b0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412730 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--37cfd703--64b2--55b0--ad28--4f6812d5fa0d-osd--block--37cfd703--64b2--55b0--ad28--4f6812d5fa0d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KNoUEf-TJ8y-mEou-hIgr-GLCl-tNSf-zuT3gs', 'scsi-0QEMU_QEMU_HARDDISK_26cdd52f-83be-4086-bce2-9cb6df4f24ab', 'scsi-SQEMU_QEMU_HARDDISK_26cdd52f-83be-4086-bce2-9cb6df4f24ab'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412770 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7b4d4f98--8928--5a24--8a9c--c2096dcbe0fa-osd--block--7b4d4f98--8928--5a24--8a9c--c2096dcbe0fa'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dgyX59-ld2G-gwPN-ZkQx-fE5Q-h7ke-QieFAN', 'scsi-0QEMU_QEMU_HARDDISK_91cfe094-4682-4bfc-95e3-88354566cb8a', 'scsi-SQEMU_QEMU_HARDDISK_91cfe094-4682-4bfc-95e3-88354566cb8a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412798 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412818 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ace49a83-40fe-462c-82a5-a32ee72a9346', 'scsi-SQEMU_QEMU_HARDDISK_ace49a83-40fe-462c-82a5-a32ee72a9346'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412857 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412888 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a47a132-03ad-4adf-a37b-d405efe1a07c', 'scsi-SQEMU_QEMU_HARDDISK_3a47a132-03ad-4adf-a37b-d405efe1a07c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412908 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.412934 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412952 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412964 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412975 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:58:39.412995 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28', 'scsi-SQEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28-part1', 'scsi-SQEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28-part14', 'scsi-SQEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28-part15', 'scsi-SQEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28-part16', 'scsi-SQEMU_QEMU_HARDDISK_b48e1683-8cea-4971-bdc8-cd04d1d3aa28-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-02 00:58:39.413020 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:58:39.413033 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:58:39.413045 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:58:39.413057 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:58:39.413074 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:58:39.413093 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:58:39.413112 | orchestrator | skipping: [testbed-node-1] =>
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92', 'scsi-SQEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92-part1', 'scsi-SQEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92-part14', 'scsi-SQEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92-part15', 'scsi-SQEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92-part16', 'scsi-SQEMU_QEMU_HARDDISK_ac41253e-4ec4-41ef-b319-b223dc253c92-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:58:39.413132 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:58:39.413144 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:58:39.413156 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.413173 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.413186 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.413197 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.413213 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:58:39.413226 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:58:39.413237 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:58:39.413249 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:58:39.413267 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:58:39.413278 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:58:39.413296 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:58:39.413313 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:58:39.413326 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4', 'scsi-SQEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4-part1', 'scsi-SQEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4-part14', 'scsi-SQEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4-part15', 'scsi-SQEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4-part16', 'scsi-SQEMU_QEMU_HARDDISK_3d352e57-4d3e-4622-b1e9-3f51c1a118c4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-02 00:58:39.413345 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:58:39.413361 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.413380 | orchestrator |
2026-01-02 00:58:39.413407 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-02 00:58:39.413492 | orchestrator | Friday 02 January 2026 00:47:23 +0000 (0:00:01.298) 0:00:34.787 ********
2026-01-02 00:58:39.413512 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.413529 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.413544 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.413562 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:58:39.413582 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:58:39.413601 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:58:39.413619 | orchestrator |
2026-01-02 00:58:39.413639 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-02 00:58:39.413657 | orchestrator | Friday 02 January 2026 00:47:25 +0000 (0:00:01.485) 0:00:36.273 ********
2026-01-02 00:58:39.413676 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.413687 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.413698 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.413709 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:58:39.413719 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:58:39.413731 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:58:39.413741 | orchestrator |
2026-01-02 00:58:39.413763 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-02 00:58:39.413775 | orchestrator | Friday 02 January 2026 00:47:26 +0000 (0:00:00.918) 0:00:37.191 ********
2026-01-02 00:58:39.413786 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.413797 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.413808 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.413819 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.413831 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.413842 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.413853 | orchestrator |
2026-01-02 00:58:39.413864 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-02 00:58:39.413884 | orchestrator | Friday 02 January 2026 00:47:27 +0000 (0:00:01.258) 0:00:38.449 ********
2026-01-02 00:58:39.413895 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.413906 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.413917 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.413928 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.413939 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.413950 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.413961 | orchestrator |
2026-01-02 00:58:39.413977 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-02 00:58:39.413995 | orchestrator | Friday 02 January 2026 00:47:28 +0000 (0:00:00.872) 0:00:39.321 ********
2026-01-02 00:58:39.414011 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.414090 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.414107 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.414118 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.414129 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.414140 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.414151 | orchestrator |
2026-01-02 00:58:39.414165 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-02 00:58:39.414185 | orchestrator | Friday 02 January 2026 00:47:29 +0000 (0:00:00.890) 0:00:40.212 ********
2026-01-02 00:58:39.414203 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.414222 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.414239 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.414257 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.414275 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.414294 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.414311 | orchestrator |
2026-01-02 00:58:39.414329 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-02 00:58:39.414350 | orchestrator | Friday 02 January 2026 00:47:29 +0000 (0:00:00.744) 0:00:40.957 ********
2026-01-02 00:58:39.414364 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-02 00:58:39.414468 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-02 00:58:39.414482 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-02 00:58:39.414493 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-02 00:58:39.414504 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-01-02 00:58:39.414518 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-02 00:58:39.414537 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
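The `Set_fact _monitor_addresses - ipv4` task above loops every play host over the three monitor nodes (testbed-node-0..2), accumulating one address entry per monitor; the ipv6 variant that follows is skipped, consistent with an IPv4-only deployment. An illustrative Python sketch of the resulting structure (the `{name, addr}` dict shape is an assumption about ceph-facts internals; the IPs are the ones shown later in this log's `ceph_run_cmd` delegation lines):

```python
# Illustrative reconstruction of the _monitor_addresses fact: each set_fact
# iteration appends one entry per monitor item, so every host ends up with
# the full monitor address list.
mon_hosts = ["testbed-node-0", "testbed-node-1", "testbed-node-2"]
mon_ips = {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}

def monitor_addresses(hosts, ips):
    """Accumulate one {name, addr} entry per monitor, mirroring the loop."""
    return [{"name": h, "addr": ips[h]} for h in hosts]

print(monitor_addresses(mon_hosts, mon_ips))
```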
2026-01-02 00:58:39.414555 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-01-02 00:58:39.414574 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-02 00:58:39.414592 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-02 00:58:39.414612 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-02 00:58:39.414631 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-01-02 00:58:39.414648 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-02 00:58:39.414669 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-02 00:58:39.414686 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-01-02 00:58:39.414705 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-02 00:58:39.414717 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-02 00:58:39.414728 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-02 00:58:39.414739 | orchestrator |
2026-01-02 00:58:39.414750 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-02 00:58:39.414762 | orchestrator | Friday 02 January 2026 00:47:32 +0000 (0:00:02.892) 0:00:43.849 ********
2026-01-02 00:58:39.414773 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-02 00:58:39.414784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-02 00:58:39.414795 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-02 00:58:39.414819 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.414830 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-02 00:58:39.414841 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-02 00:58:39.414852 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-02 00:58:39.414863 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.414874 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-02 00:58:39.414910 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-02 00:58:39.414922 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-02 00:58:39.414933 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-02 00:58:39.414944 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-02 00:58:39.414955 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-02 00:58:39.414966 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-02 00:58:39.414978 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-02 00:58:39.414989 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-02 00:58:39.415000 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.415011 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.415022 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.415041 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-02 00:58:39.415053 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-02 00:58:39.415064 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-02 00:58:39.415078 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.415097 | orchestrator |
2026-01-02 00:58:39.415113 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-02 00:58:39.415130 | orchestrator | Friday 02 January 2026 00:47:33 +0000 (0:00:00.878) 0:00:44.727 ********
2026-01-02 00:58:39.415148 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.415166 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.415185 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.415206 | orchestrator |
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:58:39.415224 | orchestrator |
2026-01-02 00:58:39.415243 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-02 00:58:39.415263 | orchestrator | Friday 02 January 2026 00:47:35 +0000 (0:00:01.602) 0:00:46.330 ********
2026-01-02 00:58:39.415282 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.415302 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.415321 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.415335 | orchestrator |
2026-01-02 00:58:39.415347 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-02 00:58:39.415358 | orchestrator | Friday 02 January 2026 00:47:35 +0000 (0:00:00.479) 0:00:46.809 ********
2026-01-02 00:58:39.415369 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.415380 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.415391 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.415402 | orchestrator |
2026-01-02 00:58:39.415448 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-02 00:58:39.415462 | orchestrator | Friday 02 January 2026 00:47:36 +0000 (0:00:00.589) 0:00:47.399 ********
2026-01-02 00:58:39.415473 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.415485 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.415496 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.415507 | orchestrator |
2026-01-02 00:58:39.415517 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-02 00:58:39.415529 | orchestrator | Friday 02 January 2026 00:47:37 +0000 (0:00:01.024) 0:00:48.424 ********
2026-01-02 00:58:39.415552 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.415563 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.415574 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.415585 | orchestrator |
2026-01-02 00:58:39.415596 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-02 00:58:39.415608 | orchestrator | Friday 02 January 2026 00:47:39 +0000 (0:00:01.662) 0:00:50.087 ********
2026-01-02 00:58:39.415619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:58:39.415630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-02 00:58:39.415641 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-02 00:58:39.415652 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.415663 | orchestrator |
2026-01-02 00:58:39.415674 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-02 00:58:39.415685 | orchestrator | Friday 02 January 2026 00:47:39 +0000 (0:00:00.554) 0:00:50.642 ********
2026-01-02 00:58:39.415696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:58:39.415707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-02 00:58:39.415719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-02 00:58:39.415730 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.415740 | orchestrator |
2026-01-02 00:58:39.415751 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-02 00:58:39.415762 | orchestrator | Friday 02 January 2026 00:47:40 +0000 (0:00:00.570) 0:00:51.212 ********
2026-01-02 00:58:39.415773 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:58:39.415784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-02 00:58:39.415795 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-02 00:58:39.415806 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.415816 | orchestrator |
2026-01-02 00:58:39.415827 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-02 00:58:39.415894 | orchestrator | Friday 02 January 2026 00:47:40 +0000 (0:00:00.499) 0:00:51.712 ********
2026-01-02 00:58:39.415906 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.415917 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.415928 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.415939 | orchestrator |
2026-01-02 00:58:39.415951 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-02 00:58:39.415962 | orchestrator | Friday 02 January 2026 00:47:41 +0000 (0:00:00.639) 0:00:52.352 ********
2026-01-02 00:58:39.415973 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-02 00:58:39.415985 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-02 00:58:39.416009 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-02 00:58:39.416021 | orchestrator |
2026-01-02 00:58:39.416060 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-02 00:58:39.416073 | orchestrator | Friday 02 January 2026 00:47:43 +0000 (0:00:01.885) 0:00:54.237 ********
2026-01-02 00:58:39.416084 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-02 00:58:39.416096 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-02 00:58:39.416107 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-02 00:58:39.416118 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:58:39.416129 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-02 00:58:39.416148 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-02 00:58:39.416159 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-02 00:58:39.416170 | orchestrator |
2026-01-02 00:58:39.416181 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-02 00:58:39.416207 | orchestrator | Friday 02 January 2026 00:47:44 +0000 (0:00:00.986) 0:00:55.224 ********
2026-01-02 00:58:39.416227 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-02 00:58:39.416245 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-02 00:58:39.416263 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-02 00:58:39.416280 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:58:39.416297 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-02 00:58:39.416315 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-02 00:58:39.416335 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-02 00:58:39.416355 | orchestrator |
2026-01-02 00:58:39.416374 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-02 00:58:39.416393 | orchestrator | Friday 02 January 2026 00:47:46 +0000 (0:00:02.710) 0:00:57.934 ********
2026-01-02 00:58:39.416406 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:58:39.416487 | orchestrator |
2026-01-02 00:58:39.416501 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-02 00:58:39.416513 | orchestrator | Friday 02 January 2026 00:47:48 +0000 (0:00:01.432) 0:00:59.366 ********
2026-01-02 00:58:39.416524 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:58:39.416535 | orchestrator |
2026-01-02 00:58:39.416547 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-02 00:58:39.416558 | orchestrator | Friday 02 January 2026 00:47:49 +0000 (0:00:01.291) 0:01:00.658 ********
2026-01-02 00:58:39.416569 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.416580 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.416591 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.416602 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:58:39.416613 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:58:39.416624 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:58:39.416635 | orchestrator |
2026-01-02 00:58:39.416646 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-02 00:58:39.416658 | orchestrator | Friday 02 January 2026 00:47:50 +0000 (0:00:01.223) 0:01:01.881 ********
2026-01-02 00:58:39.416668 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.416680 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.416691 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.416702 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.416713 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.416724 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.416735 | orchestrator |
2026-01-02 00:58:39.416746 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-02 00:58:39.416758 | orchestrator | Friday 02 January 2026 00:47:52 +0000 (0:00:01.116) 0:01:02.998 ********
2026-01-02 00:58:39.416768 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.416779 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.416791 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.416801 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.416813 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.416824 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.416836 | orchestrator |
2026-01-02 00:58:39.416846 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-02 00:58:39.416858 | orchestrator | Friday 02 January 2026 00:47:53 +0000 (0:00:01.126) 0:01:04.125 ********
2026-01-02 00:58:39.416869 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.416879 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.416898 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.416908 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.416918 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.416927 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.416937 | orchestrator |
2026-01-02 00:58:39.416947 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-02 00:58:39.416957 | orchestrator | Friday 02 January 2026 00:47:54 +0000 (0:00:01.092) 0:01:05.217 ********
2026-01-02 00:58:39.416967 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.416978 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.416988 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.416998 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:58:39.417008 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:58:39.417027 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:58:39.417038 | orchestrator |
2026-01-02 00:58:39.417049 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-02 00:58:39.417058 | orchestrator | Friday 02 January 2026 00:47:55 +0000 (0:00:01.504) 0:01:06.722 ******** 2026-01-02 00:58:39.417068 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.417078 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.417088 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.417098 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.417108 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.417118 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.417128 | orchestrator | 2026-01-02 00:58:39.417138 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-02 00:58:39.417149 | orchestrator | Friday 02 January 2026 00:47:56 +0000 (0:00:00.569) 0:01:07.291 ******** 2026-01-02 00:58:39.417159 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.417169 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.417187 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.417197 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.417207 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.417217 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.417226 | orchestrator | 2026-01-02 00:58:39.417236 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-02 00:58:39.417246 | orchestrator | Friday 02 January 2026 00:47:56 +0000 (0:00:00.672) 0:01:07.963 ******** 2026-01-02 00:58:39.417256 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.417266 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.417276 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.417286 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.417296 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.417306 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.417316 | orchestrator | 2026-01-02 
00:58:39.417325 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-02 00:58:39.417336 | orchestrator | Friday 02 January 2026 00:47:58 +0000 (0:00:01.169) 0:01:09.133 ******** 2026-01-02 00:58:39.417346 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.417356 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.417366 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.417376 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.417385 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.417395 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.417406 | orchestrator | 2026-01-02 00:58:39.417437 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-02 00:58:39.417449 | orchestrator | Friday 02 January 2026 00:47:59 +0000 (0:00:01.751) 0:01:10.884 ******** 2026-01-02 00:58:39.417458 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.417469 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.417479 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.417489 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.417499 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.417510 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.417526 | orchestrator | 2026-01-02 00:58:39.417536 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-02 00:58:39.417546 | orchestrator | Friday 02 January 2026 00:48:01 +0000 (0:00:01.390) 0:01:12.275 ******** 2026-01-02 00:58:39.417556 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.417566 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.417576 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.417586 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.417596 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.417606 | 
orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.417616 | orchestrator | 2026-01-02 00:58:39.417625 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-02 00:58:39.417636 | orchestrator | Friday 02 January 2026 00:48:02 +0000 (0:00:01.184) 0:01:13.459 ******** 2026-01-02 00:58:39.417645 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.417655 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.417665 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.417674 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.417684 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.417694 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.417703 | orchestrator | 2026-01-02 00:58:39.417713 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-02 00:58:39.417723 | orchestrator | Friday 02 January 2026 00:48:03 +0000 (0:00:00.837) 0:01:14.297 ******** 2026-01-02 00:58:39.417733 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.417743 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.417753 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.417763 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.417773 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.417783 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.417792 | orchestrator | 2026-01-02 00:58:39.417802 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-02 00:58:39.417812 | orchestrator | Friday 02 January 2026 00:48:04 +0000 (0:00:01.224) 0:01:15.522 ******** 2026-01-02 00:58:39.417822 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.417832 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.417842 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.417851 | orchestrator | skipping: [testbed-node-0] 2026-01-02 
00:58:39.417861 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.417871 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.417881 | orchestrator | 2026-01-02 00:58:39.417890 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-02 00:58:39.417900 | orchestrator | Friday 02 January 2026 00:48:05 +0000 (0:00:01.111) 0:01:16.633 ******** 2026-01-02 00:58:39.417910 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.417920 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.417929 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.417939 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.417949 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.417958 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.417968 | orchestrator | 2026-01-02 00:58:39.417978 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-02 00:58:39.417988 | orchestrator | Friday 02 January 2026 00:48:06 +0000 (0:00:00.914) 0:01:17.547 ******** 2026-01-02 00:58:39.417998 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.418008 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.418069 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.418081 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.418099 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.418109 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.418119 | orchestrator | 2026-01-02 00:58:39.418129 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-02 00:58:39.418139 | orchestrator | Friday 02 January 2026 00:48:07 +0000 (0:00:00.626) 0:01:18.174 ******** 2026-01-02 00:58:39.418149 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.418166 | orchestrator | skipping: [testbed-node-4] 2026-01-02 
00:58:39.418176 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.418186 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.418196 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.418205 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.418215 | orchestrator | 2026-01-02 00:58:39.418225 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-02 00:58:39.418240 | orchestrator | Friday 02 January 2026 00:48:07 +0000 (0:00:00.801) 0:01:18.975 ******** 2026-01-02 00:58:39.418255 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.418271 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.418281 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.418291 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.418301 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.418311 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.418320 | orchestrator | 2026-01-02 00:58:39.418331 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-02 00:58:39.418340 | orchestrator | Friday 02 January 2026 00:48:08 +0000 (0:00:00.625) 0:01:19.601 ******** 2026-01-02 00:58:39.418350 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.418360 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.418370 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.418379 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.418389 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.418399 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.418408 | orchestrator | 2026-01-02 00:58:39.418449 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-02 00:58:39.418468 | orchestrator | Friday 02 January 2026 00:48:10 +0000 (0:00:01.501) 0:01:21.102 ******** 2026-01-02 00:58:39.418485 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:58:39.418502 | 
orchestrator | changed: [testbed-node-3] 2026-01-02 00:58:39.418513 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:58:39.418522 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:39.418533 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:39.418542 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:39.418552 | orchestrator | 2026-01-02 00:58:39.418562 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-02 00:58:39.418572 | orchestrator | Friday 02 January 2026 00:48:12 +0000 (0:00:01.900) 0:01:23.003 ******** 2026-01-02 00:58:39.418582 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:58:39.418591 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:58:39.418601 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:58:39.418611 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:39.418620 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:39.418630 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:39.418640 | orchestrator | 2026-01-02 00:58:39.418650 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-02 00:58:39.418660 | orchestrator | Friday 02 January 2026 00:48:14 +0000 (0:00:02.745) 0:01:25.749 ******** 2026-01-02 00:58:39.418670 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:39.418680 | orchestrator | 2026-01-02 00:58:39.418690 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-02 00:58:39.418700 | orchestrator | Friday 02 January 2026 00:48:15 +0000 (0:00:01.160) 0:01:26.909 ******** 2026-01-02 00:58:39.418710 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.418719 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.418730 | orchestrator | 
skipping: [testbed-node-5] 2026-01-02 00:58:39.418746 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.418773 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.418791 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.418807 | orchestrator | 2026-01-02 00:58:39.418823 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-02 00:58:39.418850 | orchestrator | Friday 02 January 2026 00:48:16 +0000 (0:00:00.537) 0:01:27.447 ******** 2026-01-02 00:58:39.418865 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.418881 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.418897 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.418914 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.418929 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.418943 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.418959 | orchestrator | 2026-01-02 00:58:39.418973 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-02 00:58:39.418988 | orchestrator | Friday 02 January 2026 00:48:17 +0000 (0:00:00.837) 0:01:28.285 ******** 2026-01-02 00:58:39.419005 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-02 00:58:39.419020 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-02 00:58:39.419037 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-02 00:58:39.419053 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-02 00:58:39.419070 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-02 00:58:39.419086 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-02 00:58:39.419103 | orchestrator | ok: 
[testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-02 00:58:39.419119 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-02 00:58:39.419135 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-02 00:58:39.419152 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-02 00:58:39.419197 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-02 00:58:39.419218 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-02 00:58:39.419234 | orchestrator | 2026-01-02 00:58:39.419252 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-02 00:58:39.419269 | orchestrator | Friday 02 January 2026 00:48:18 +0000 (0:00:01.557) 0:01:29.843 ******** 2026-01-02 00:58:39.419287 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:58:39.419305 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:58:39.419323 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:58:39.419348 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:39.419365 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:39.419381 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:39.419397 | orchestrator | 2026-01-02 00:58:39.419435 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-01-02 00:58:39.419464 | orchestrator | Friday 02 January 2026 00:48:20 +0000 (0:00:01.132) 0:01:30.976 ******** 2026-01-02 00:58:39.419480 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.419497 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.419514 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.419531 | orchestrator | skipping: [testbed-node-0] 2026-01-02 
00:58:39.419547 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.419563 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.419581 | orchestrator | 2026-01-02 00:58:39.419598 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-02 00:58:39.419616 | orchestrator | Friday 02 January 2026 00:48:20 +0000 (0:00:00.545) 0:01:31.521 ******** 2026-01-02 00:58:39.419633 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.419649 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.419667 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.419678 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.419688 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.419710 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.419720 | orchestrator | 2026-01-02 00:58:39.419730 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-02 00:58:39.419740 | orchestrator | Friday 02 January 2026 00:48:21 +0000 (0:00:00.724) 0:01:32.245 ******** 2026-01-02 00:58:39.419750 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.419760 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.419770 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.419780 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.419790 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.419799 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.419809 | orchestrator | 2026-01-02 00:58:39.419819 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-02 00:58:39.419829 | orchestrator | Friday 02 January 2026 00:48:21 +0000 (0:00:00.572) 0:01:32.818 ******** 2026-01-02 00:58:39.419840 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:39.419851 | orchestrator | 2026-01-02 00:58:39.419861 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-02 00:58:39.419878 | orchestrator | Friday 02 January 2026 00:48:22 +0000 (0:00:01.061) 0:01:33.879 ******** 2026-01-02 00:58:39.419893 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.419910 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.419926 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.419941 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.419956 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.419973 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.419991 | orchestrator | 2026-01-02 00:58:39.420009 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-02 00:58:39.420025 | orchestrator | Friday 02 January 2026 00:49:23 +0000 (0:01:00.281) 0:02:34.161 ******** 2026-01-02 00:58:39.420040 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-02 00:58:39.420050 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-02 00:58:39.420060 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-02 00:58:39.420070 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.420080 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-02 00:58:39.420089 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-02 00:58:39.420099 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-02 00:58:39.420109 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.420118 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  
2026-01-02 00:58:39.420131 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-02 00:58:39.420147 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-02 00:58:39.420162 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.420178 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-02 00:58:39.420194 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-02 00:58:39.420210 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-02 00:58:39.420226 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.420241 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-02 00:58:39.420257 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-02 00:58:39.420273 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-02 00:58:39.420290 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.420338 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-02 00:58:39.420370 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-02 00:58:39.420381 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-02 00:58:39.420391 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.420401 | orchestrator | 2026-01-02 00:58:39.420412 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-02 00:58:39.420491 | orchestrator | Friday 02 January 2026 00:49:23 +0000 (0:00:00.711) 0:02:34.873 ******** 2026-01-02 00:58:39.420507 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.420517 | orchestrator | skipping: [testbed-node-4] 2026-01-02 
00:58:39.420526 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.420536 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.420546 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.420556 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.420566 | orchestrator | 2026-01-02 00:58:39.420583 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-01-02 00:58:39.420592 | orchestrator | Friday 02 January 2026 00:49:24 +0000 (0:00:00.723) 0:02:35.596 ******** 2026-01-02 00:58:39.420600 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.420608 | orchestrator | 2026-01-02 00:58:39.420616 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-02 00:58:39.420625 | orchestrator | Friday 02 January 2026 00:49:24 +0000 (0:00:00.128) 0:02:35.724 ******** 2026-01-02 00:58:39.420633 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.420640 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.420649 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.420657 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.420664 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.420672 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.420680 | orchestrator | 2026-01-02 00:58:39.420688 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-02 00:58:39.420696 | orchestrator | Friday 02 January 2026 00:49:25 +0000 (0:00:00.662) 0:02:36.387 ******** 2026-01-02 00:58:39.420704 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.420717 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.420731 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.420745 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.420757 | orchestrator | skipping: [testbed-node-1] 2026-01-02 
00:58:39.420770 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.420783 | orchestrator | 2026-01-02 00:58:39.420796 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-02 00:58:39.420810 | orchestrator | Friday 02 January 2026 00:49:26 +0000 (0:00:00.928) 0:02:37.315 ******** 2026-01-02 00:58:39.420823 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.420837 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.420848 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.420856 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.420864 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.420872 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.420880 | orchestrator | 2026-01-02 00:58:39.420889 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-02 00:58:39.420896 | orchestrator | Friday 02 January 2026 00:49:27 +0000 (0:00:00.677) 0:02:37.993 ******** 2026-01-02 00:58:39.420905 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.420913 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.420921 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.420929 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.420937 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.420946 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.420954 | orchestrator | 2026-01-02 00:58:39.420962 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-02 00:58:39.420970 | orchestrator | Friday 02 January 2026 00:49:29 +0000 (0:00:02.882) 0:02:40.875 ******** 2026-01-02 00:58:39.420985 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.420993 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.421002 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.421011 | orchestrator | ok: [testbed-node-0] 
2026-01-02 00:58:39.421025 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.421038 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.421050 | orchestrator | 2026-01-02 00:58:39.421062 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-02 00:58:39.421075 | orchestrator | Friday 02 January 2026 00:49:30 +0000 (0:00:00.726) 0:02:41.602 ******** 2026-01-02 00:58:39.421089 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:39.421105 | orchestrator | 2026-01-02 00:58:39.421117 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-02 00:58:39.421125 | orchestrator | Friday 02 January 2026 00:49:32 +0000 (0:00:01.505) 0:02:43.108 ******** 2026-01-02 00:58:39.421133 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.421141 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.421149 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.421157 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.421165 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.421173 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.421181 | orchestrator | 2026-01-02 00:58:39.421189 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-02 00:58:39.421197 | orchestrator | Friday 02 January 2026 00:49:33 +0000 (0:00:01.177) 0:02:44.285 ******** 2026-01-02 00:58:39.421205 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.421213 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.421221 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.421245 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.421254 | orchestrator | skipping: [testbed-node-1] 2026-01-02 
00:58:39.421262 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.421269 | orchestrator |
2026-01-02 00:58:39.421277 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-01-02 00:58:39.421285 | orchestrator | Friday 02 January 2026 00:49:34 +0000 (0:00:00.993) 0:02:45.279 ********
2026-01-02 00:58:39.421293 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.421301 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.421329 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.421338 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.421346 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.421354 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.421362 | orchestrator |
2026-01-02 00:58:39.421370 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-01-02 00:58:39.421378 | orchestrator | Friday 02 January 2026 00:49:35 +0000 (0:00:01.289) 0:02:46.569 ********
2026-01-02 00:58:39.421386 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.421394 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.421402 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.421410 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.421438 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.421446 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.421454 | orchestrator |
2026-01-02 00:58:39.421463 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-01-02 00:58:39.421477 | orchestrator | Friday 02 January 2026 00:49:36 +0000 (0:00:00.926) 0:02:47.495 ********
2026-01-02 00:58:39.421485 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.421493 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.421501 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.421509 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.421517 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.421525 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.421542 | orchestrator |
2026-01-02 00:58:39.421551 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-01-02 00:58:39.421559 | orchestrator | Friday 02 January 2026 00:49:37 +0000 (0:00:01.393) 0:02:48.888 ********
2026-01-02 00:58:39.421567 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.421575 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.421583 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.421591 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.421599 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.421607 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.421615 | orchestrator |
2026-01-02 00:58:39.421624 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-01-02 00:58:39.421632 | orchestrator | Friday 02 January 2026 00:49:38 +0000 (0:00:00.802) 0:02:49.691 ********
2026-01-02 00:58:39.421640 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.421648 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.421656 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.421664 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.421676 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.421689 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.421702 | orchestrator |
2026-01-02 00:58:39.421716 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-01-02 00:58:39.421730 | orchestrator | Friday 02 January 2026 00:49:39 +0000 (0:00:00.997) 0:02:50.688 ********
2026-01-02 00:58:39.421745 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.421759 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.421772 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.421786 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.421798 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.421811 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.421822 | orchestrator |
2026-01-02 00:58:39.421836 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-01-02 00:58:39.421849 | orchestrator | Friday 02 January 2026 00:49:40 +0000 (0:00:01.132) 0:02:51.820 ********
2026-01-02 00:58:39.421862 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.421876 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.421892 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.421905 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:58:39.421917 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:58:39.421930 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:58:39.421943 | orchestrator |
2026-01-02 00:58:39.421956 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-01-02 00:58:39.421971 | orchestrator | Friday 02 January 2026 00:49:42 +0000 (0:00:01.267) 0:02:53.088 ********
2026-01-02 00:58:39.421988 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-5, testbed-node-0, testbed-node-4, testbed-node-1, testbed-node-2
2026-01-02 00:58:39.422003 | orchestrator |
2026-01-02 00:58:39.422051 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-01-02 00:58:39.422070 | orchestrator | Friday 02 January 2026 00:49:43 +0000 (0:00:01.428) 0:02:54.517 ********
2026-01-02 00:58:39.422083 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-01-02 00:58:39.422096 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-01-02 00:58:39.422110 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-01-02 00:58:39.422125 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-01-02 00:58:39.422138 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-01-02 00:58:39.422152 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-01-02 00:58:39.422165 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-01-02 00:58:39.422179 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-01-02 00:58:39.422193 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-01-02 00:58:39.422216 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-01-02 00:58:39.422224 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-01-02 00:58:39.422234 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-01-02 00:58:39.422248 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-01-02 00:58:39.422261 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-01-02 00:58:39.422275 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-01-02 00:58:39.422288 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-01-02 00:58:39.422299 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-01-02 00:58:39.422311 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-01-02 00:58:39.422349 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-01-02 00:58:39.422365 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-01-02 00:58:39.422380 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-01-02 00:58:39.422395 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-01-02 00:58:39.422409 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-01-02 00:58:39.422444 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-01-02 00:58:39.422459 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-01-02 00:58:39.422473 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-01-02 00:58:39.422487 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-01-02 00:58:39.422503 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-01-02 00:58:39.422525 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-01-02 00:58:39.422541 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-01-02 00:58:39.422555 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-01-02 00:58:39.422569 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-01-02 00:58:39.422581 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-01-02 00:58:39.422592 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-01-02 00:58:39.422605 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-01-02 00:58:39.422619 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-01-02 00:58:39.422633 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-01-02 00:58:39.422645 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-01-02 00:58:39.422658 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-01-02 00:58:39.422671 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-01-02 00:58:39.422684 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-01-02 00:58:39.422697 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-01-02 00:58:39.422711 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-01-02 00:58:39.422726 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-01-02 00:58:39.422740 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-01-02 00:58:39.422756 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-02 00:58:39.422770 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-01-02 00:58:39.422784 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-01-02 00:58:39.422798 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-02 00:58:39.422811 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-01-02 00:58:39.422824 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-02 00:58:39.422838 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-02 00:58:39.422865 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-02 00:58:39.422879 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-02 00:58:39.422892 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-02 00:58:39.422902 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-02 00:58:39.422910 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-02 00:58:39.422918 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-02 00:58:39.422925 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-02 00:58:39.422933 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-02 00:58:39.422941 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-02 00:58:39.422949 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-02 00:58:39.422957 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-02 00:58:39.422965 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-02 00:58:39.422973 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-02 00:58:39.422981 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-02 00:58:39.422990 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-02 00:58:39.423004 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-02 00:58:39.423017 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-02 00:58:39.423030 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-02 00:58:39.423043 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-02 00:58:39.423057 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-02 00:58:39.423070 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-02 00:58:39.423082 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-02 00:58:39.423096 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-02 00:58:39.423111 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-02 00:58:39.423151 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-02 00:58:39.423167 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-02 00:58:39.423181 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-02 00:58:39.423196 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-02 00:58:39.423211 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-02 00:58:39.423225 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-02 00:58:39.423234 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-02 00:58:39.423243 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-01-02 00:58:39.423251 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-02 00:58:39.423265 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-01-02 00:58:39.423276 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-01-02 00:58:39.423289 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-01-02 00:58:39.423303 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-01-02 00:58:39.423316 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-01-02 00:58:39.423329 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-01-02 00:58:39.423342 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-01-02 00:58:39.423368 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-01-02 00:58:39.423384 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-01-02 00:58:39.423398 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-01-02 00:58:39.423466 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-01-02 00:58:39.423483 | orchestrator |
2026-01-02 00:58:39.423497 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-01-02 00:58:39.423510 | orchestrator | Friday 02 January 2026 00:49:50 +0000 (0:00:07.182) 0:03:01.699 ********
2026-01-02 00:58:39.423524 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.423538 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.423551 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.423562 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:58:39.423644 | orchestrator |
2026-01-02 00:58:39.423658 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-01-02 00:58:39.423671 | orchestrator | Friday 02 January 2026 00:49:51 +0000 (0:00:01.019) 0:03:02.718 ********
2026-01-02 00:58:39.423685 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-02 00:58:39.423699 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-02 00:58:39.423714 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-02 00:58:39.423727 | orchestrator |
2026-01-02 00:58:39.423740 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-01-02 00:58:39.423759 | orchestrator | Friday 02 January 2026 00:49:53 +0000 (0:00:01.433) 0:03:04.152 ********
2026-01-02 00:58:39.423779 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-02 00:58:39.423800 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-02 00:58:39.423819 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-02 00:58:39.423836 | orchestrator |
2026-01-02 00:58:39.423853 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-01-02 00:58:39.423872 | orchestrator | Friday 02 January 2026 00:49:55 +0000 (0:00:01.840) 0:03:05.993 ********
2026-01-02 00:58:39.423889 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.423906 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.423922 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.423938 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.423954 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.423970 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.423985 | orchestrator |
2026-01-02 00:58:39.424001 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-01-02 00:58:39.424017 | orchestrator | Friday 02 January 2026 00:49:55 +0000 (0:00:00.765) 0:03:06.758 ********
2026-01-02 00:58:39.424034 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.424048 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.424063 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.424078 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.424093 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.424109 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.424126 | orchestrator |
2026-01-02 00:58:39.424143 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-01-02 00:58:39.424160 | orchestrator | Friday 02 January 2026 00:49:56 +0000 (0:00:01.153) 0:03:07.912 ********
2026-01-02 00:58:39.424176 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.424211 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.424229 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.424245 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.424262 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.424277 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.424289 | orchestrator |
2026-01-02 00:58:39.424332 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-01-02 00:58:39.424346 | orchestrator | Friday 02 January 2026 00:49:57 +0000 (0:00:00.976) 0:03:08.889 ********
2026-01-02 00:58:39.424358 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.424371 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.424384 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.424397 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.424411 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.424446 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.424461 | orchestrator |
2026-01-02 00:58:39.424476 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-01-02 00:58:39.424491 | orchestrator | Friday 02 January 2026 00:49:58 +0000 (0:00:01.015) 0:03:09.905 ********
2026-01-02 00:58:39.424505 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.424520 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.424535 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.424549 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.424575 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.424589 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.424602 | orchestrator |
2026-01-02 00:58:39.424616 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-01-02 00:58:39.424631 | orchestrator | Friday 02 January 2026 00:50:00 +0000 (0:00:01.239) 0:03:11.144 ********
2026-01-02 00:58:39.424645 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.424656 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.424666 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.424676 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.424687 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.424698 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.424709 | orchestrator |
2026-01-02 00:58:39.424720 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-02 00:58:39.424731 | orchestrator | Friday 02 January 2026 00:50:00 +0000 (0:00:00.723) 0:03:11.868 ********
2026-01-02 00:58:39.424742 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.424753 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.424764 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.424775 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.424788 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.424800 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.424812 | orchestrator |
2026-01-02 00:58:39.424824 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-01-02 00:58:39.424835 | orchestrator | Friday 02 January 2026 00:50:01 +0000 (0:00:00.557) 0:03:12.426 ********
2026-01-02 00:58:39.424848 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.424860 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.424872 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.424883 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.424895 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.424905 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.424918 | orchestrator |
2026-01-02 00:58:39.424929 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-01-02 00:58:39.424941 | orchestrator | Friday 02 January 2026 00:50:02 +0000 (0:00:00.739) 0:03:13.165 ********
2026-01-02 00:58:39.424952 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.424964 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.424991 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.425003 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.425014 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.425026 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.425039 | orchestrator |
2026-01-02 00:58:39.425052 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-01-02 00:58:39.425065 | orchestrator | Friday 02 January 2026 00:50:05 +0000 (0:00:03.326) 0:03:16.492 ********
2026-01-02 00:58:39.425076 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.425088 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.425100 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.425112 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.425124 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.425137 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.425150 | orchestrator |
2026-01-02 00:58:39.425161 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-01-02 00:58:39.425175 | orchestrator | Friday 02 January 2026 00:50:06 +0000 (0:00:00.950) 0:03:17.442 ********
2026-01-02 00:58:39.425188 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.425200 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.425212 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.425224 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.425235 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.425247 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.425259 | orchestrator |
2026-01-02 00:58:39.425271 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-01-02 00:58:39.425283 | orchestrator | Friday 02 January 2026 00:50:07 +0000 (0:00:00.884) 0:03:18.327 ********
2026-01-02 00:58:39.425294 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.425305 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.425319 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.425333 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.425347 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.425360 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.425371 | orchestrator |
2026-01-02 00:58:39.425383 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-01-02 00:58:39.425395 | orchestrator | Friday 02 January 2026 00:50:08 +0000 (0:00:01.357) 0:03:19.684 ********
2026-01-02 00:58:39.425407 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-02 00:58:39.425483 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-02 00:58:39.425495 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-02 00:58:39.425507 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.425542 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.425556 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.425568 | orchestrator |
2026-01-02 00:58:39.425580 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-01-02 00:58:39.425592 | orchestrator | Friday 02 January 2026 00:50:09 +0000 (0:00:00.763) 0:03:20.449 ********
2026-01-02 00:58:39.425607 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-01-02 00:58:39.425630 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-01-02 00:58:39.425644 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.425667 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-01-02 00:58:39.425680 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-01-02 00:58:39.425692 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-01-02 00:58:39.425703 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-01-02 00:58:39.425714 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.425724 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.425735 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.425746 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.425757 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.425768 | orchestrator |
2026-01-02 00:58:39.425778 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-01-02 00:58:39.425789 | orchestrator | Friday 02 January 2026 00:50:10 +0000 (0:00:01.272) 0:03:21.722 ********
2026-01-02 00:58:39.425800 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.425810 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.425822 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.425834 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.425845 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.425857 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.425868 | orchestrator |
2026-01-02 00:58:39.425879 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-01-02 00:58:39.425891 | orchestrator | Friday 02 January 2026 00:50:11 +0000 (0:00:00.747) 0:03:22.470 ********
2026-01-02 00:58:39.425903 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.425914 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.425925 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.425936 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.425947 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.425957 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.425968 | orchestrator |
2026-01-02 00:58:39.425979 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-02 00:58:39.425990 | orchestrator | Friday 02 January 2026 00:50:12 +0000 (0:00:00.896) 0:03:23.367 ********
2026-01-02 00:58:39.426000 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.426011 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.426048 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.426060 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.426070 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.426080 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.426089 | orchestrator |
2026-01-02 00:58:39.426099 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-02 00:58:39.426110 | orchestrator | Friday 02 January 2026 00:50:13 +0000 (0:00:00.686) 0:03:24.054 ********
2026-01-02 00:58:39.426120 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.426130 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.426156 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.426167 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.426178 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.426189 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.426199 | orchestrator |
2026-01-02 00:58:39.426209 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-02 00:58:39.426242 | orchestrator | Friday 02 January 2026 00:50:13 +0000 (0:00:00.899) 0:03:24.953 ********
2026-01-02 00:58:39.426253 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.426263 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.426273 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.426283 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.426295 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.426305 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.426316 | orchestrator |
2026-01-02 00:58:39.426329 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-02 00:58:39.426341 | orchestrator | Friday 02 January 2026 00:50:14 +0000 (0:00:00.671) 0:03:25.625 ********
2026-01-02 00:58:39.426353 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.426363 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.426375 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.426386 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.426396 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.426407 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.426446 | orchestrator |
2026-01-02 00:58:39.426457 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-02 00:58:39.426468 | orchestrator | Friday 02 January 2026 00:50:15 +0000 (0:00:00.813) 0:03:26.438 ********
2026-01-02 00:58:39.426478 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:58:39.426489 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-02 00:58:39.426499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-02 00:58:39.426510 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.426520 | orchestrator |
2026-01-02 00:58:39.426531 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-02 00:58:39.426541 | orchestrator | Friday 02 January 2026 00:50:15 +0000 (0:00:00.360) 0:03:26.799 ********
2026-01-02 00:58:39.426551 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:58:39.426561 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-02 00:58:39.426572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-02 00:58:39.426582 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.426592 | orchestrator |
2026-01-02 00:58:39.426603 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-02 00:58:39.426613 | orchestrator | Friday 02 January 2026 00:50:16 +0000 (0:00:00.406) 0:03:27.206 ********
2026-01-02 00:58:39.426623 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:58:39.426633 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-02 00:58:39.426643 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-02 00:58:39.426652 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.426663 | orchestrator |
2026-01-02 00:58:39.426673 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-02 00:58:39.426683 | orchestrator | Friday 02 January 2026 00:50:16 +0000 (0:00:00.372) 0:03:27.579 ********
2026-01-02 00:58:39.426694 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.426704 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.426713 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.426722 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.426730 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.426740 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.426749 | orchestrator |
2026-01-02 00:58:39.426759 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-02 00:58:39.426779 | orchestrator | Friday 02 January 2026 00:50:17 +0000 (0:00:00.745) 0:03:28.324 ********
2026-01-02 00:58:39.426790 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-02 00:58:39.426800 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-02 00:58:39.426811 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-01-02 00:58:39.426821 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-02 00:58:39.426832 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.426842 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-01-02 00:58:39.426852 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:58:39.426863 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-01-02 00:58:39.426873 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:58:39.426884 | orchestrator |
2026-01-02 00:58:39.426897 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-01-02 00:58:39.426908 | orchestrator | Friday 02 January 2026 00:50:19 +0000 (0:00:02.620) 0:03:30.945 ********
2026-01-02 00:58:39.426919 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:58:39.426929 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:58:39.426939 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:58:39.426949 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:58:39.426959 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:58:39.426970 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:58:39.426980 | orchestrator |
2026-01-02 00:58:39.426991 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-02 00:58:39.427001 | orchestrator | Friday 02 January 2026 00:50:22 +0000 (0:00:03.008) 0:03:33.953 ********
2026-01-02 00:58:39.427011 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:58:39.427022 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:58:39.427032 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:58:39.427043 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:58:39.427053 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:58:39.427063 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:58:39.427074 | orchestrator |
2026-01-02 00:58:39.427084 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-01-02 00:58:39.427094 | orchestrator | Friday 02 January 2026 00:50:23 +0000 (0:00:01.012) 0:03:34.966 ********
2026-01-02 00:58:39.427105 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.427115 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.427125 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.427136 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:58:39.427147 | orchestrator |
2026-01-02 00:58:39.427157 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-01-02 00:58:39.427187 | orchestrator | Friday 02 January 2026 00:50:24 +0000 (0:00:00.898) 0:03:35.864 ********
2026-01-02 00:58:39.427198 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:58:39.427208 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:58:39.427219 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:58:39.427229 | orchestrator |
2026-01-02 00:58:39.427239 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-01-02 00:58:39.427250 | orchestrator | Friday 02 January 2026 00:50:25 +0000 (0:00:00.294) 0:03:36.158 ********
2026-01-02 00:58:39.427260 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:58:39.427271 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:58:39.427281 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:58:39.427292 | orchestrator |
2026-01-02 00:58:39.427302 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-01-02 00:58:39.427313 | orchestrator | Friday 02 January 2026 00:50:26 +0000 (0:00:01.488) 0:03:37.647 ********
2026-01-02 00:58:39.427323 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-02 00:58:39.427339 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-02 00:58:39.427350 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-02 00:58:39.427368 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:58:39.427378 | orchestrator |
2026-01-02 00:58:39.427389 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-02 00:58:39.427399 | orchestrator | Friday 02 January 2026 00:50:27 +0000 (0:00:00.569) 0:03:38.216 ******** 2026-01-02 00:58:39.427409 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.427436 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.427446 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.427456 | orchestrator | 2026-01-02 00:58:39.427466 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-02 00:58:39.427477 | orchestrator | Friday 02 January 2026 00:50:27 +0000 (0:00:00.313) 0:03:38.530 ******** 2026-01-02 00:58:39.427487 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.427498 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.427508 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.427518 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:58:39.427529 | orchestrator | 2026-01-02 00:58:39.427539 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-02 00:58:39.427550 | orchestrator | Friday 02 January 2026 00:50:28 +0000 (0:00:01.055) 0:03:39.585 ******** 2026-01-02 00:58:39.427560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-02 00:58:39.427571 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-02 00:58:39.427581 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-02 00:58:39.427591 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.427602 | orchestrator | 2026-01-02 00:58:39.427612 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-02 00:58:39.427623 | orchestrator | Friday 02 January 2026 00:50:29 +0000 
(0:00:00.422) 0:03:40.007 ******** 2026-01-02 00:58:39.427633 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.427643 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.427652 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.427662 | orchestrator | 2026-01-02 00:58:39.427672 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-02 00:58:39.427682 | orchestrator | Friday 02 January 2026 00:50:29 +0000 (0:00:00.381) 0:03:40.389 ******** 2026-01-02 00:58:39.427693 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.427704 | orchestrator | 2026-01-02 00:58:39.427714 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-02 00:58:39.427724 | orchestrator | Friday 02 January 2026 00:50:29 +0000 (0:00:00.304) 0:03:40.694 ******** 2026-01-02 00:58:39.427734 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.427744 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.427754 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.427765 | orchestrator | 2026-01-02 00:58:39.427775 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-02 00:58:39.427785 | orchestrator | Friday 02 January 2026 00:50:30 +0000 (0:00:00.340) 0:03:41.035 ******** 2026-01-02 00:58:39.427796 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.427806 | orchestrator | 2026-01-02 00:58:39.427816 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-02 00:58:39.427827 | orchestrator | Friday 02 January 2026 00:50:30 +0000 (0:00:00.249) 0:03:41.284 ******** 2026-01-02 00:58:39.427837 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.427848 | orchestrator | 2026-01-02 00:58:39.427858 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-02 
00:58:39.427868 | orchestrator | Friday 02 January 2026 00:50:30 +0000 (0:00:00.217) 0:03:41.502 ******** 2026-01-02 00:58:39.427879 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.427889 | orchestrator | 2026-01-02 00:58:39.427899 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-02 00:58:39.427910 | orchestrator | Friday 02 January 2026 00:50:30 +0000 (0:00:00.131) 0:03:41.633 ******** 2026-01-02 00:58:39.427928 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.427938 | orchestrator | 2026-01-02 00:58:39.427948 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-02 00:58:39.427959 | orchestrator | Friday 02 January 2026 00:50:31 +0000 (0:00:00.803) 0:03:42.436 ******** 2026-01-02 00:58:39.427969 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.427979 | orchestrator | 2026-01-02 00:58:39.427990 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-02 00:58:39.428000 | orchestrator | Friday 02 January 2026 00:50:31 +0000 (0:00:00.261) 0:03:42.698 ******** 2026-01-02 00:58:39.428010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-02 00:58:39.428020 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-02 00:58:39.428031 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-02 00:58:39.428041 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.428051 | orchestrator | 2026-01-02 00:58:39.428062 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-02 00:58:39.428089 | orchestrator | Friday 02 January 2026 00:50:32 +0000 (0:00:00.463) 0:03:43.161 ******** 2026-01-02 00:58:39.428100 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.428110 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.428120 | 
orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.428131 | orchestrator | 2026-01-02 00:58:39.428141 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-02 00:58:39.428152 | orchestrator | Friday 02 January 2026 00:50:32 +0000 (0:00:00.345) 0:03:43.506 ******** 2026-01-02 00:58:39.428162 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.428172 | orchestrator | 2026-01-02 00:58:39.428183 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-02 00:58:39.428193 | orchestrator | Friday 02 January 2026 00:50:32 +0000 (0:00:00.239) 0:03:43.746 ******** 2026-01-02 00:58:39.428204 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.428215 | orchestrator | 2026-01-02 00:58:39.428226 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-02 00:58:39.428242 | orchestrator | Friday 02 January 2026 00:50:33 +0000 (0:00:00.251) 0:03:43.998 ******** 2026-01-02 00:58:39.428254 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.428265 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.428276 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.428287 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:58:39.428298 | orchestrator | 2026-01-02 00:58:39.428308 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-01-02 00:58:39.428319 | orchestrator | Friday 02 January 2026 00:50:34 +0000 (0:00:01.172) 0:03:45.170 ******** 2026-01-02 00:58:39.428330 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.428340 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.428349 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.428359 | orchestrator | 2026-01-02 00:58:39.428369 | orchestrator | RUNNING HANDLER 
[ceph-handler : Copy mds restart script] *********************** 2026-01-02 00:58:39.428379 | orchestrator | Friday 02 January 2026 00:50:34 +0000 (0:00:00.357) 0:03:45.528 ******** 2026-01-02 00:58:39.428389 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:58:39.428399 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:58:39.428409 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:58:39.428473 | orchestrator | 2026-01-02 00:58:39.428486 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-02 00:58:39.428497 | orchestrator | Friday 02 January 2026 00:50:35 +0000 (0:00:01.368) 0:03:46.897 ******** 2026-01-02 00:58:39.428508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-02 00:58:39.428518 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-02 00:58:39.428528 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-02 00:58:39.428545 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.428554 | orchestrator | 2026-01-02 00:58:39.428563 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-02 00:58:39.428573 | orchestrator | Friday 02 January 2026 00:50:36 +0000 (0:00:00.918) 0:03:47.815 ******** 2026-01-02 00:58:39.428582 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.428591 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.428600 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.428609 | orchestrator | 2026-01-02 00:58:39.428618 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-02 00:58:39.428627 | orchestrator | Friday 02 January 2026 00:50:37 +0000 (0:00:00.638) 0:03:48.453 ******** 2026-01-02 00:58:39.428636 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.428645 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.428654 | orchestrator | 
skipping: [testbed-node-2] 2026-01-02 00:58:39.428662 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:58:39.428672 | orchestrator | 2026-01-02 00:58:39.428682 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-02 00:58:39.428691 | orchestrator | Friday 02 January 2026 00:50:38 +0000 (0:00:01.208) 0:03:49.662 ******** 2026-01-02 00:58:39.428700 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.428709 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.428719 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.428728 | orchestrator | 2026-01-02 00:58:39.428738 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-02 00:58:39.428747 | orchestrator | Friday 02 January 2026 00:50:39 +0000 (0:00:00.599) 0:03:50.262 ******** 2026-01-02 00:58:39.428755 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:58:39.428765 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:58:39.428774 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:58:39.428783 | orchestrator | 2026-01-02 00:58:39.428792 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-02 00:58:39.428801 | orchestrator | Friday 02 January 2026 00:50:40 +0000 (0:00:01.333) 0:03:51.595 ******** 2026-01-02 00:58:39.428810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-02 00:58:39.428820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-02 00:58:39.428829 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-02 00:58:39.428838 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.428847 | orchestrator | 2026-01-02 00:58:39.428857 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-02 
00:58:39.428866 | orchestrator | Friday 02 January 2026 00:50:41 +0000 (0:00:00.651) 0:03:52.247 ******** 2026-01-02 00:58:39.428875 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.428884 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.428893 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.428901 | orchestrator | 2026-01-02 00:58:39.428910 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-01-02 00:58:39.428919 | orchestrator | Friday 02 January 2026 00:50:41 +0000 (0:00:00.353) 0:03:52.601 ******** 2026-01-02 00:58:39.428928 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.428937 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.428946 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.428955 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.428965 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.428990 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.429000 | orchestrator | 2026-01-02 00:58:39.429008 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-02 00:58:39.429017 | orchestrator | Friday 02 January 2026 00:50:42 +0000 (0:00:00.986) 0:03:53.587 ******** 2026-01-02 00:58:39.429026 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.429037 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.429053 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.429062 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:39.429071 | orchestrator | 2026-01-02 00:58:39.429080 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-02 00:58:39.429089 | orchestrator | Friday 02 January 2026 00:50:43 +0000 (0:00:00.988) 0:03:54.576 ******** 2026-01-02 00:58:39.429099 | orchestrator | 
ok: [testbed-node-0] 2026-01-02 00:58:39.429108 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.429123 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.429132 | orchestrator | 2026-01-02 00:58:39.429142 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-02 00:58:39.429151 | orchestrator | Friday 02 January 2026 00:50:44 +0000 (0:00:00.556) 0:03:55.133 ******** 2026-01-02 00:58:39.429160 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:39.429169 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:39.429178 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:39.429187 | orchestrator | 2026-01-02 00:58:39.429196 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-02 00:58:39.429205 | orchestrator | Friday 02 January 2026 00:50:45 +0000 (0:00:01.355) 0:03:56.488 ******** 2026-01-02 00:58:39.429214 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-02 00:58:39.429223 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-02 00:58:39.429232 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-02 00:58:39.429241 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.429250 | orchestrator | 2026-01-02 00:58:39.429259 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-02 00:58:39.429268 | orchestrator | Friday 02 January 2026 00:50:46 +0000 (0:00:00.651) 0:03:57.139 ******** 2026-01-02 00:58:39.429277 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.429287 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.429296 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.429305 | orchestrator | 2026-01-02 00:58:39.429314 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-01-02 00:58:39.429323 | orchestrator | 2026-01-02 
00:58:39.429333 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-02 00:58:39.429342 | orchestrator | Friday 02 January 2026 00:50:46 +0000 (0:00:00.587) 0:03:57.727 ******** 2026-01-02 00:58:39.429351 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:39.429361 | orchestrator | 2026-01-02 00:58:39.429370 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-02 00:58:39.429379 | orchestrator | Friday 02 January 2026 00:50:47 +0000 (0:00:00.813) 0:03:58.541 ******** 2026-01-02 00:58:39.429388 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:39.429397 | orchestrator | 2026-01-02 00:58:39.429406 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-02 00:58:39.429429 | orchestrator | Friday 02 January 2026 00:50:48 +0000 (0:00:00.521) 0:03:59.062 ******** 2026-01-02 00:58:39.429438 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.429448 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.429457 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.429466 | orchestrator | 2026-01-02 00:58:39.429475 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-02 00:58:39.429484 | orchestrator | Friday 02 January 2026 00:50:49 +0000 (0:00:01.090) 0:04:00.152 ******** 2026-01-02 00:58:39.429493 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.429502 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.429511 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.429520 | orchestrator | 2026-01-02 00:58:39.429529 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-01-02 00:58:39.429544 | orchestrator | Friday 02 January 2026 00:50:49 +0000 (0:00:00.375) 0:04:00.528 ******** 2026-01-02 00:58:39.429553 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.429562 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.429571 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.429580 | orchestrator | 2026-01-02 00:58:39.429589 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-02 00:58:39.429599 | orchestrator | Friday 02 January 2026 00:50:49 +0000 (0:00:00.386) 0:04:00.914 ******** 2026-01-02 00:58:39.429608 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.429617 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.429626 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.429636 | orchestrator | 2026-01-02 00:58:39.429644 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-02 00:58:39.429653 | orchestrator | Friday 02 January 2026 00:50:50 +0000 (0:00:00.372) 0:04:01.286 ******** 2026-01-02 00:58:39.429662 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.429671 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.429680 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.429689 | orchestrator | 2026-01-02 00:58:39.429698 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-02 00:58:39.429707 | orchestrator | Friday 02 January 2026 00:50:51 +0000 (0:00:01.212) 0:04:02.499 ******** 2026-01-02 00:58:39.429716 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.429725 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.429734 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.429744 | orchestrator | 2026-01-02 00:58:39.429753 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-02 
00:58:39.429762 | orchestrator | Friday 02 January 2026 00:50:51 +0000 (0:00:00.374) 0:04:02.873 ******** 2026-01-02 00:58:39.429785 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.429795 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.429805 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.429814 | orchestrator | 2026-01-02 00:58:39.429823 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-02 00:58:39.429833 | orchestrator | Friday 02 January 2026 00:50:52 +0000 (0:00:00.533) 0:04:03.407 ******** 2026-01-02 00:58:39.429842 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.429854 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.429864 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.429874 | orchestrator | 2026-01-02 00:58:39.429884 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-02 00:58:39.429894 | orchestrator | Friday 02 January 2026 00:50:53 +0000 (0:00:00.886) 0:04:04.294 ******** 2026-01-02 00:58:39.429904 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.429914 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.429924 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.429933 | orchestrator | 2026-01-02 00:58:39.429947 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-02 00:58:39.429957 | orchestrator | Friday 02 January 2026 00:50:54 +0000 (0:00:01.160) 0:04:05.454 ******** 2026-01-02 00:58:39.429966 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.429975 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.429984 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.429994 | orchestrator | 2026-01-02 00:58:39.430003 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-02 00:58:39.430012 | orchestrator | Friday 
02 January 2026 00:50:54 +0000 (0:00:00.350) 0:04:05.804 ******** 2026-01-02 00:58:39.430048 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.430057 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.430066 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.430076 | orchestrator | 2026-01-02 00:58:39.430086 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-02 00:58:39.430095 | orchestrator | Friday 02 January 2026 00:50:55 +0000 (0:00:00.390) 0:04:06.195 ******** 2026-01-02 00:58:39.430111 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.430121 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.430130 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.430139 | orchestrator | 2026-01-02 00:58:39.430148 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-02 00:58:39.430158 | orchestrator | Friday 02 January 2026 00:50:55 +0000 (0:00:00.379) 0:04:06.575 ******** 2026-01-02 00:58:39.430167 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.430176 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.430185 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.430195 | orchestrator | 2026-01-02 00:58:39.430204 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-02 00:58:39.430213 | orchestrator | Friday 02 January 2026 00:50:55 +0000 (0:00:00.324) 0:04:06.899 ******** 2026-01-02 00:58:39.430223 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.430232 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.430241 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.430250 | orchestrator | 2026-01-02 00:58:39.430259 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-02 00:58:39.430267 | orchestrator | Friday 02 January 2026 
00:50:56 +0000 (0:00:00.591) 0:04:07.490 ******** 2026-01-02 00:58:39.430273 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.430278 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.430284 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.430289 | orchestrator | 2026-01-02 00:58:39.430296 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-02 00:58:39.430307 | orchestrator | Friday 02 January 2026 00:50:56 +0000 (0:00:00.326) 0:04:07.817 ******** 2026-01-02 00:58:39.430312 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.430318 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.430323 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.430328 | orchestrator | 2026-01-02 00:58:39.430334 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-02 00:58:39.430339 | orchestrator | Friday 02 January 2026 00:50:57 +0000 (0:00:00.306) 0:04:08.123 ******** 2026-01-02 00:58:39.430345 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.430350 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.430356 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.430361 | orchestrator | 2026-01-02 00:58:39.430367 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-02 00:58:39.430372 | orchestrator | Friday 02 January 2026 00:50:57 +0000 (0:00:00.346) 0:04:08.470 ******** 2026-01-02 00:58:39.430378 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.430383 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.430388 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.430394 | orchestrator | 2026-01-02 00:58:39.430399 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-02 00:58:39.430405 | orchestrator | Friday 02 January 2026 00:50:57 +0000 (0:00:00.476) 
0:04:08.947 ******** 2026-01-02 00:58:39.430410 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.430462 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.430473 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.430480 | orchestrator | 2026-01-02 00:58:39.430486 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-01-02 00:58:39.430492 | orchestrator | Friday 02 January 2026 00:50:58 +0000 (0:00:00.503) 0:04:09.450 ******** 2026-01-02 00:58:39.430498 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.430507 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.430516 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.430525 | orchestrator | 2026-01-02 00:58:39.430534 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-01-02 00:58:39.430543 | orchestrator | Friday 02 January 2026 00:50:58 +0000 (0:00:00.285) 0:04:09.735 ******** 2026-01-02 00:58:39.430553 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:39.430569 | orchestrator | 2026-01-02 00:58:39.430575 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-01-02 00:58:39.430581 | orchestrator | Friday 02 January 2026 00:50:59 +0000 (0:00:00.665) 0:04:10.400 ******** 2026-01-02 00:58:39.430586 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.430592 | orchestrator | 2026-01-02 00:58:39.430612 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-01-02 00:58:39.430618 | orchestrator | Friday 02 January 2026 00:50:59 +0000 (0:00:00.138) 0:04:10.539 ******** 2026-01-02 00:58:39.430623 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-02 00:58:39.430629 | orchestrator | 2026-01-02 00:58:39.430634 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2026-01-02 00:58:39.430641 | orchestrator | Friday 02 January 2026 00:51:00 +0000 (0:00:00.979) 0:04:11.518 ******** 2026-01-02 00:58:39.430651 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.430660 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.430668 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.430677 | orchestrator | 2026-01-02 00:58:39.430686 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-01-02 00:58:39.430695 | orchestrator | Friday 02 January 2026 00:51:00 +0000 (0:00:00.376) 0:04:11.895 ******** 2026-01-02 00:58:39.430704 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.430714 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.430728 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.430735 | orchestrator | 2026-01-02 00:58:39.430740 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-01-02 00:58:39.430746 | orchestrator | Friday 02 January 2026 00:51:01 +0000 (0:00:00.513) 0:04:12.409 ******** 2026-01-02 00:58:39.430751 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:39.430757 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:39.430762 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:39.430768 | orchestrator | 2026-01-02 00:58:39.430773 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-01-02 00:58:39.430779 | orchestrator | Friday 02 January 2026 00:51:02 +0000 (0:00:01.347) 0:04:13.756 ******** 2026-01-02 00:58:39.430784 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:39.430790 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:39.430795 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:39.430800 | orchestrator | 2026-01-02 00:58:39.430806 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2026-01-02 00:58:39.430811 | orchestrator | Friday 02 January 2026 00:51:03 +0000 (0:00:00.836) 0:04:14.593 ******** 2026-01-02 00:58:39.430817 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:39.430822 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:39.430827 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:39.430833 | orchestrator | 2026-01-02 00:58:39.430838 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-01-02 00:58:39.430844 | orchestrator | Friday 02 January 2026 00:51:04 +0000 (0:00:00.765) 0:04:15.358 ******** 2026-01-02 00:58:39.430849 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.430855 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.430861 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.430866 | orchestrator | 2026-01-02 00:58:39.430872 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-01-02 00:58:39.430877 | orchestrator | Friday 02 January 2026 00:51:05 +0000 (0:00:01.007) 0:04:16.365 ******** 2026-01-02 00:58:39.430883 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:39.430888 | orchestrator | 2026-01-02 00:58:39.430894 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-01-02 00:58:39.430899 | orchestrator | Friday 02 January 2026 00:51:07 +0000 (0:00:01.884) 0:04:18.250 ******** 2026-01-02 00:58:39.430904 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.430910 | orchestrator | 2026-01-02 00:58:39.430915 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-01-02 00:58:39.430930 | orchestrator | Friday 02 January 2026 00:51:07 +0000 (0:00:00.715) 0:04:18.965 ******** 2026-01-02 00:58:39.430936 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-02 00:58:39.430941 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 00:58:39.430947 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 00:58:39.430952 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-02 00:58:39.430958 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-01-02 00:58:39.430964 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-02 00:58:39.430969 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-02 00:58:39.430974 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-01-02 00:58:39.430980 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-01-02 00:58:39.430985 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-01-02 00:58:39.430991 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-02 00:58:39.430996 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-01-02 00:58:39.431002 | orchestrator | 2026-01-02 00:58:39.431008 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-01-02 00:58:39.431013 | orchestrator | Friday 02 January 2026 00:51:11 +0000 (0:00:03.750) 0:04:22.716 ******** 2026-01-02 00:58:39.431018 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:39.431023 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:39.431028 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:39.431033 | orchestrator | 2026-01-02 00:58:39.431038 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-01-02 00:58:39.431042 | orchestrator | Friday 02 January 2026 00:51:13 +0000 (0:00:01.469) 0:04:24.185 ******** 2026-01-02 00:58:39.431047 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.431052 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.431057 | orchestrator | ok: [testbed-node-2] 
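(Editor's note: the "Generate initial monmap" and "Ceph monitor mkfs" tasks that follow wrap `monmaptool` and `ceph-mon` invocations inside the ceph container. As a rough illustration only — the node names and 192.168.16.x addresses are taken from this run, while the fsid placeholder and exact flag set are assumptions, since ceph-ansible templates the real command — the monmap creation amounts to something like:)

```python
# Hypothetical sketch of the monmaptool command the
# "Generate initial monmap" task wraps; flags are approximate.
mons = {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}

def monmaptool_cmd(fsid, mons, path="/etc/ceph/monmap"):
    # --create builds a fresh monmap, --clobber overwrites any
    # existing file, and each --add registers one monitor.
    cmd = ["monmaptool", "--create", "--clobber", "--fsid", fsid]
    for name, addr in sorted(mons.items()):
        cmd += ["--add", name, addr]
    cmd.append(path)
    return cmd

# Placeholder fsid for illustration; the real cluster fsid is
# generated elsewhere in the playbook.
print(" ".join(monmaptool_cmd("00000000-0000-0000-0000-000000000000", mons)))
```

(Each mon then runs `ceph-mon --mkfs` against the resulting monmap plus the initial keyring created earlier, which is what the subsequent "Ceph monitor mkfs with keyring" task performs on all three nodes.)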
2026-01-02 00:58:39.431062 | orchestrator | 2026-01-02 00:58:39.431067 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-01-02 00:58:39.431072 | orchestrator | Friday 02 January 2026 00:51:13 +0000 (0:00:00.287) 0:04:24.473 ******** 2026-01-02 00:58:39.431076 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.431081 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.431086 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.431091 | orchestrator | 2026-01-02 00:58:39.431096 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-01-02 00:58:39.431101 | orchestrator | Friday 02 January 2026 00:51:13 +0000 (0:00:00.453) 0:04:24.926 ******** 2026-01-02 00:58:39.431105 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:39.431124 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:39.431133 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:39.431141 | orchestrator | 2026-01-02 00:58:39.431150 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-01-02 00:58:39.431158 | orchestrator | Friday 02 January 2026 00:51:15 +0000 (0:00:01.918) 0:04:26.844 ******** 2026-01-02 00:58:39.431167 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:39.431176 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:39.431184 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:39.431193 | orchestrator | 2026-01-02 00:58:39.431201 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-01-02 00:58:39.431209 | orchestrator | Friday 02 January 2026 00:51:17 +0000 (0:00:01.319) 0:04:28.164 ******** 2026-01-02 00:58:39.431217 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.431225 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.431234 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.431242 
| orchestrator | 2026-01-02 00:58:39.431255 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-01-02 00:58:39.431262 | orchestrator | Friday 02 January 2026 00:51:17 +0000 (0:00:00.246) 0:04:28.410 ******** 2026-01-02 00:58:39.431276 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:39.431285 | orchestrator | 2026-01-02 00:58:39.431292 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-01-02 00:58:39.431301 | orchestrator | Friday 02 January 2026 00:51:18 +0000 (0:00:00.815) 0:04:29.226 ******** 2026-01-02 00:58:39.431309 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.431317 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.431325 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.431333 | orchestrator | 2026-01-02 00:58:39.431341 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-01-02 00:58:39.431349 | orchestrator | Friday 02 January 2026 00:51:18 +0000 (0:00:00.455) 0:04:29.681 ******** 2026-01-02 00:58:39.431357 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.431365 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.431373 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.431381 | orchestrator | 2026-01-02 00:58:39.431389 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-01-02 00:58:39.431397 | orchestrator | Friday 02 January 2026 00:51:19 +0000 (0:00:00.408) 0:04:30.089 ******** 2026-01-02 00:58:39.431406 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:39.431428 | orchestrator | 2026-01-02 00:58:39.431437 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] 
***************** 2026-01-02 00:58:39.431445 | orchestrator | Friday 02 January 2026 00:51:20 +0000 (0:00:01.629) 0:04:31.719 ******** 2026-01-02 00:58:39.431453 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:39.431461 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:39.431469 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:39.431477 | orchestrator | 2026-01-02 00:58:39.431486 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-01-02 00:58:39.431494 | orchestrator | Friday 02 January 2026 00:51:23 +0000 (0:00:02.926) 0:04:34.646 ******** 2026-01-02 00:58:39.431502 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:39.431510 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:39.431518 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:39.431526 | orchestrator | 2026-01-02 00:58:39.431534 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-01-02 00:58:39.431543 | orchestrator | Friday 02 January 2026 00:51:25 +0000 (0:00:02.045) 0:04:36.692 ******** 2026-01-02 00:58:39.431551 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:39.431559 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:39.431567 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:39.431575 | orchestrator | 2026-01-02 00:58:39.431583 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-01-02 00:58:39.431591 | orchestrator | Friday 02 January 2026 00:51:28 +0000 (0:00:02.325) 0:04:39.018 ******** 2026-01-02 00:58:39.431599 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:39.431607 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:39.431615 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:39.431623 | orchestrator | 2026-01-02 00:58:39.431631 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] 
********************************** 2026-01-02 00:58:39.431639 | orchestrator | Friday 02 January 2026 00:51:30 +0000 (0:00:02.572) 0:04:41.590 ******** 2026-01-02 00:58:39.431647 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:39.431655 | orchestrator | 2026-01-02 00:58:39.431663 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-01-02 00:58:39.431671 | orchestrator | Friday 02 January 2026 00:51:31 +0000 (0:00:00.748) 0:04:42.338 ******** 2026-01-02 00:58:39.431680 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-01-02 00:58:39.431694 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.431702 | orchestrator | 2026-01-02 00:58:39.431710 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-01-02 00:58:39.431718 | orchestrator | Friday 02 January 2026 00:51:53 +0000 (0:00:21.767) 0:05:04.106 ******** 2026-01-02 00:58:39.431726 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.431734 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.431742 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.431750 | orchestrator | 2026-01-02 00:58:39.431758 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-01-02 00:58:39.431766 | orchestrator | Friday 02 January 2026 00:52:03 +0000 (0:00:10.846) 0:05:14.952 ******** 2026-01-02 00:58:39.431774 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.431782 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.431790 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.431798 | orchestrator | 2026-01-02 00:58:39.431806 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-01-02 00:58:39.431827 | orchestrator | 
Friday 02 January 2026 00:52:04 +0000 (0:00:00.617) 0:05:15.569 ******** 2026-01-02 00:58:39.431838 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2fabda8972c8f4d67a7cf1a468a396f09cf87cbb'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-01-02 00:58:39.431853 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2fabda8972c8f4d67a7cf1a468a396f09cf87cbb'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-01-02 00:58:39.431863 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2fabda8972c8f4d67a7cf1a468a396f09cf87cbb'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-01-02 00:58:39.431873 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2fabda8972c8f4d67a7cf1a468a396f09cf87cbb'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-01-02 00:58:39.431882 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 
'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2fabda8972c8f4d67a7cf1a468a396f09cf87cbb'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-01-02 00:58:39.431890 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2fabda8972c8f4d67a7cf1a468a396f09cf87cbb'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__2fabda8972c8f4d67a7cf1a468a396f09cf87cbb'}])  2026-01-02 00:58:39.431901 | orchestrator | 2026-01-02 00:58:39.431910 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-02 00:58:39.431919 | orchestrator | Friday 02 January 2026 00:52:20 +0000 (0:00:15.513) 0:05:31.083 ******** 2026-01-02 00:58:39.431927 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.431935 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.431949 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.431957 | orchestrator | 2026-01-02 00:58:39.431966 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-02 00:58:39.431974 | orchestrator | Friday 02 January 2026 00:52:20 +0000 (0:00:00.398) 0:05:31.482 ******** 2026-01-02 00:58:39.431982 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:39.431991 | orchestrator | 2026-01-02 00:58:39.431999 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-02 00:58:39.432008 | orchestrator | Friday 02 January 2026 00:52:21 +0000 (0:00:00.807) 0:05:32.289 ******** 2026-01-02 00:58:39.432016 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.432026 | orchestrator | ok: [testbed-node-1] 2026-01-02 
00:58:39.432034 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.432044 | orchestrator | 2026-01-02 00:58:39.432053 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-02 00:58:39.432061 | orchestrator | Friday 02 January 2026 00:52:21 +0000 (0:00:00.326) 0:05:32.615 ******** 2026-01-02 00:58:39.432069 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.432077 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.432086 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.432094 | orchestrator | 2026-01-02 00:58:39.432102 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-02 00:58:39.432111 | orchestrator | Friday 02 January 2026 00:52:21 +0000 (0:00:00.334) 0:05:32.949 ******** 2026-01-02 00:58:39.432119 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-02 00:58:39.432127 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-02 00:58:39.432136 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-02 00:58:39.432144 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.432152 | orchestrator | 2026-01-02 00:58:39.432161 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-02 00:58:39.432169 | orchestrator | Friday 02 January 2026 00:52:22 +0000 (0:00:00.964) 0:05:33.913 ******** 2026-01-02 00:58:39.432177 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.432186 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.432207 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.432216 | orchestrator | 2026-01-02 00:58:39.432224 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-01-02 00:58:39.432232 | orchestrator | 2026-01-02 00:58:39.432241 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-01-02 00:58:39.432250 | orchestrator | Friday 02 January 2026 00:52:23 +0000 (0:00:00.868) 0:05:34.782 ******** 2026-01-02 00:58:39.432258 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:39.432267 | orchestrator | 2026-01-02 00:58:39.432275 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-02 00:58:39.432283 | orchestrator | Friday 02 January 2026 00:52:24 +0000 (0:00:00.533) 0:05:35.315 ******** 2026-01-02 00:58:39.432292 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:39.432301 | orchestrator | 2026-01-02 00:58:39.432309 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-02 00:58:39.432318 | orchestrator | Friday 02 January 2026 00:52:25 +0000 (0:00:00.787) 0:05:36.103 ******** 2026-01-02 00:58:39.432326 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.432334 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.432343 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.432351 | orchestrator | 2026-01-02 00:58:39.432359 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-02 00:58:39.432368 | orchestrator | Friday 02 January 2026 00:52:26 +0000 (0:00:00.926) 0:05:37.029 ******** 2026-01-02 00:58:39.432376 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.432390 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.432398 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.432407 | orchestrator | 2026-01-02 00:58:39.432431 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-02 00:58:39.432440 | orchestrator | Friday 02 January 2026 00:52:26 +0000 
(0:00:00.319) 0:05:37.349 ******** 2026-01-02 00:58:39.432448 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.432456 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.432464 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.432473 | orchestrator | 2026-01-02 00:58:39.432481 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-02 00:58:39.432489 | orchestrator | Friday 02 January 2026 00:52:26 +0000 (0:00:00.574) 0:05:37.924 ******** 2026-01-02 00:58:39.432498 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.432506 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.432514 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.432522 | orchestrator | 2026-01-02 00:58:39.432531 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-02 00:58:39.432539 | orchestrator | Friday 02 January 2026 00:52:27 +0000 (0:00:00.315) 0:05:38.239 ******** 2026-01-02 00:58:39.432547 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.432556 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.432564 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.432573 | orchestrator | 2026-01-02 00:58:39.432581 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-02 00:58:39.432590 | orchestrator | Friday 02 January 2026 00:52:28 +0000 (0:00:00.774) 0:05:39.013 ******** 2026-01-02 00:58:39.432598 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.432606 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.432615 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.432624 | orchestrator | 2026-01-02 00:58:39.432633 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-02 00:58:39.432641 | orchestrator | Friday 02 January 2026 00:52:28 +0000 (0:00:00.350) 
0:05:39.364 ******** 2026-01-02 00:58:39.432650 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.432657 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.432665 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.432673 | orchestrator | 2026-01-02 00:58:39.432681 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-02 00:58:39.432688 | orchestrator | Friday 02 January 2026 00:52:28 +0000 (0:00:00.583) 0:05:39.947 ******** 2026-01-02 00:58:39.432695 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.432704 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.432792 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.432813 | orchestrator | 2026-01-02 00:58:39.432818 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-02 00:58:39.432824 | orchestrator | Friday 02 January 2026 00:52:29 +0000 (0:00:00.827) 0:05:40.775 ******** 2026-01-02 00:58:39.432829 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.432834 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.432841 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.432849 | orchestrator | 2026-01-02 00:58:39.432857 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-02 00:58:39.432865 | orchestrator | Friday 02 January 2026 00:52:30 +0000 (0:00:00.873) 0:05:41.648 ******** 2026-01-02 00:58:39.432873 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.432881 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.432890 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.432898 | orchestrator | 2026-01-02 00:58:39.432906 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-02 00:58:39.432915 | orchestrator | Friday 02 January 2026 00:52:31 +0000 (0:00:00.389) 0:05:42.038 ******** 2026-01-02 
00:58:39.432924 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.432932 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.432937 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.432947 | orchestrator | 2026-01-02 00:58:39.432952 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-02 00:58:39.432957 | orchestrator | Friday 02 January 2026 00:52:31 +0000 (0:00:00.319) 0:05:42.357 ******** 2026-01-02 00:58:39.432962 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.432966 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.432971 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.432976 | orchestrator | 2026-01-02 00:58:39.432981 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-02 00:58:39.433000 | orchestrator | Friday 02 January 2026 00:52:31 +0000 (0:00:00.591) 0:05:42.949 ******** 2026-01-02 00:58:39.433006 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.433011 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.433016 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.433020 | orchestrator | 2026-01-02 00:58:39.433025 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-02 00:58:39.433030 | orchestrator | Friday 02 January 2026 00:52:32 +0000 (0:00:00.456) 0:05:43.405 ******** 2026-01-02 00:58:39.433035 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.433040 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.433045 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.433050 | orchestrator | 2026-01-02 00:58:39.433054 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-02 00:58:39.433059 | orchestrator | Friday 02 January 2026 00:52:32 +0000 (0:00:00.357) 0:05:43.763 ******** 2026-01-02 00:58:39.433064 | 
orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.433069 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.433078 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.433136 | orchestrator | 2026-01-02 00:58:39.433141 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-02 00:58:39.433146 | orchestrator | Friday 02 January 2026 00:52:33 +0000 (0:00:00.319) 0:05:44.083 ******** 2026-01-02 00:58:39.433151 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.433156 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.433160 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.433165 | orchestrator | 2026-01-02 00:58:39.433170 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-02 00:58:39.433175 | orchestrator | Friday 02 January 2026 00:52:33 +0000 (0:00:00.623) 0:05:44.707 ******** 2026-01-02 00:58:39.433180 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.433185 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.433190 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.433195 | orchestrator | 2026-01-02 00:58:39.433200 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-02 00:58:39.433205 | orchestrator | Friday 02 January 2026 00:52:34 +0000 (0:00:00.384) 0:05:45.091 ******** 2026-01-02 00:58:39.433210 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.433215 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.433220 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.433224 | orchestrator | 2026-01-02 00:58:39.433229 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-02 00:58:39.433234 | orchestrator | Friday 02 January 2026 00:52:34 +0000 (0:00:00.527) 0:05:45.618 ******** 2026-01-02 00:58:39.433239 | orchestrator | ok: [testbed-node-0] 
2026-01-02 00:58:39.433244 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.433249 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.433253 | orchestrator | 2026-01-02 00:58:39.433258 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-01-02 00:58:39.433263 | orchestrator | Friday 02 January 2026 00:52:35 +0000 (0:00:01.013) 0:05:46.632 ******** 2026-01-02 00:58:39.433268 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-02 00:58:39.433273 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-02 00:58:39.433278 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-02 00:58:39.433288 | orchestrator | 2026-01-02 00:58:39.433292 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-01-02 00:58:39.433297 | orchestrator | Friday 02 January 2026 00:52:36 +0000 (0:00:00.654) 0:05:47.287 ******** 2026-01-02 00:58:39.433302 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:39.433308 | orchestrator | 2026-01-02 00:58:39.433312 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-01-02 00:58:39.433317 | orchestrator | Friday 02 January 2026 00:52:36 +0000 (0:00:00.639) 0:05:47.926 ******** 2026-01-02 00:58:39.433322 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:39.433327 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:39.433332 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:39.433337 | orchestrator | 2026-01-02 00:58:39.433342 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-01-02 00:58:39.433346 | orchestrator | Friday 02 January 2026 00:52:37 +0000 (0:00:00.758) 0:05:48.685 ******** 2026-01-02 00:58:39.433351 | 
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
Friday 02 January 2026 00:52:38 +0000 (0:00:00.605) 0:05:49.291 ********
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]

TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
Friday 02 January 2026 00:52:49 +0000 (0:00:10.971) 0:06:00.263 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Get keys from monitors] ***************************************
Friday 02 January 2026 00:52:49 +0000 (0:00:00.370) 0:06:00.633 ********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)

TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
Friday 02 January 2026 00:52:51 +0000 (0:00:02.289) 0:06:02.923 ********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-1] => (item=None)
changed: [testbed-node-2] => (item=None)

TASK [ceph-mgr : Set mgr key permissions] **************************************
Friday 02 January 2026 00:52:53 +0000 (0:00:01.455) 0:06:04.379 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
Friday 02 January 2026 00:52:54 +0000 (0:00:01.084) 0:06:05.463 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include pre_requisite.yml] ************************************
Friday 02 January 2026 00:52:54 +0000 (0:00:00.311) 0:06:05.775 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include start_mgr.yml] ****************************************
Friday 02 January 2026 00:52:55 +0000 (0:00:00.332) 0:06:06.107 ********
included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Ensure systemd service override directory exists] *************
Friday 02 January 2026 00:52:56 +0000 (0:00:00.987) 0:06:07.094 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
Friday 02 January 2026 00:52:56 +0000 (0:00:00.336) 0:06:07.430 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
Friday 02 January 2026 00:52:56 +0000 (0:00:00.334) 0:06:07.765 ********
included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Generate systemd unit file] ***********************************
Friday 02 January 2026 00:52:57 +0000 (0:00:00.791) 0:06:08.556 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
Friday 02 January 2026 00:52:59 +0000 (0:00:01.454) 0:06:10.011 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
Friday 02 January 2026 00:53:00 +0000 (0:00:01.348) 0:06:11.360 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Systemd start mgr] ********************************************
Friday 02 January 2026 00:53:02 +0000 (0:00:01.946) 0:06:13.306 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Include mgr_modules.yml] **************************************
Friday 02 January 2026 00:53:04 +0000 (0:00:02.382) 0:06:15.688 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2

TASK [ceph-mgr : Wait for all mgr to be up] ************************************
Friday 02 January 2026 00:53:05 +0000 (0:00:00.360) 0:06:16.049 ********
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left).
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
Friday 02 January 2026 00:53:41 +0000 (0:00:36.516) 0:06:52.566 ********
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
Friday 02 January 2026 00:53:42 +0000 (0:00:01.329) 0:06:53.896 ********
ok: [testbed-node-2]

TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
Friday 02 January 2026 00:53:43 +0000 (0:00:00.313) 0:06:54.209 ********
ok: [testbed-node-2]

TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
Friday 02 January 2026 00:53:43 +0000 (0:00:00.156) 0:06:54.366 ********
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)

TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
Friday 02 January 2026 00:53:49 +0000 (0:00:06.530) 0:07:00.896 ********
skipping: [testbed-node-2] => (item=balancer)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
skipping: [testbed-node-2] => (item=status)

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Friday 02 January 2026 00:53:55 +0000 (0:00:05.204) 0:07:06.101 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Friday 02 January 2026 00:53:55 +0000 (0:00:00.758) 0:07:06.860 ********
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Friday 02 January 2026 00:53:56 +0000 (0:00:00.796) 0:07:07.656 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Friday 02 January 2026 00:53:57 +0000 (0:00:00.367) 0:07:08.024 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Friday 02 January 2026 00:53:58 +0000 (0:00:01.291) 0:07:09.315 ********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Friday 02 January 2026 00:53:58 +0000 (0:00:00.635) 0:07:09.951 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-osd] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Friday 02 January 2026 00:53:59 +0000 (0:00:00.839) 0:07:10.791 ********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Friday 02 January 2026 00:54:00 +0000 (0:00:00.510) 0:07:11.302 ********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Friday 02 January 2026 00:54:01 +0000 (0:00:00.794) 0:07:12.096 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Friday 02 January 2026 00:54:01 +0000 (0:00:00.345) 0:07:12.442 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Friday 02 January 2026 00:54:02 +0000 (0:00:00.736) 0:07:13.178 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Friday 02 January 2026 00:54:02 +0000 (0:00:00.742) 0:07:13.920 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Friday 02 January 2026 00:54:03 +0000 (0:00:00.995) 0:07:14.915 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Friday 02 January 2026 00:54:04 +0000 (0:00:00.367) 0:07:15.282 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Friday 02 January 2026 00:54:04 +0000 (0:00:00.397) 0:07:15.680 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Friday 02 January 2026 00:54:05 +0000 (0:00:00.338) 0:07:16.019 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Friday 02 January 2026 00:54:06 +0000 (0:00:01.004) 0:07:17.023 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Friday 02 January 2026 00:54:06 +0000 (0:00:00.817) 0:07:17.840 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Friday 02 January 2026 00:54:07 +0000 (0:00:00.306) 0:07:18.147 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Friday 02 January 2026 00:54:07 +0000 (0:00:00.320) 0:07:18.467 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Friday 02 January 2026 00:54:08 +0000 (0:00:00.604) 0:07:19.071 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Friday 02 January 2026 00:54:08 +0000 (0:00:00.392) 0:07:19.464 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Friday 02 January 2026 00:54:08 +0000 (0:00:00.369) 0:07:19.833 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Friday 02 January 2026 00:54:09 +0000 (0:00:00.441) 0:07:20.275 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Friday 02 January 2026 00:54:09 +0000 (0:00:00.581) 0:07:20.856 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Friday 02 January 2026 00:54:10 +0000 (0:00:00.322) 0:07:21.179 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Friday 02 January 2026 00:54:10 +0000 (0:00:00.337) 0:07:21.517 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact add_osd] *********************************************
Friday 02 January 2026 00:54:11 +0000 (0:00:00.799) 0:07:22.316 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
Friday 02 January 2026 00:54:11 +0000 (0:00:00.342) 0:07:22.659 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
Friday 02 January 2026 00:54:12 +0000 (0:00:00.642) 0:07:23.302 ********
included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create tmpfiles.d directory] **********************************
Friday 02 January 2026 00:54:12 +0000 (0:00:00.508) 0:07:23.810 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Disable transparent hugepage] *********************************
Friday 02 January 2026 00:54:13 +0000 (0:00:00.575) 0:07:24.385 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
Friday 02 January 2026 00:54:13 +0000 (0:00:00.315) 0:07:24.701 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
Friday 02 January 2026 00:54:14 +0000 (0:00:00.597) 0:07:25.298 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Apply operating system tuning] ********************************
Friday 02 January 2026 00:54:14 +0000 (0:00:00.328) 0:07:25.627 ********
changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})

TASK [ceph-osd : Install dependencies] *****************************************
Friday 02 January 2026 00:54:18 +0000 (0:00:03.567) 0:07:29.195 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks common.yml] *************************************
Friday 02 January 2026 00:54:18 +0000 (0:00:00.313) 0:07:29.509 ********
included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
Friday 02 January 2026 00:54:19 +0000 (0:00:00.524) 0:07:30.033 ********
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)

TASK [ceph-osd : Get keys from monitors] ***************************************
Friday 02 January 2026 00:54:20 +0000 (0:00:01.291) 0:07:31.325 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
Friday 02 January 2026 00:54:22 +0000 (0:00:02.273) 0:07:33.598 ********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-osd : Set noup flag] ************************************************
Friday 02 January 2026 00:54:23 +0000 (0:00:01.309) 0:07:34.908 ********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
Friday 02 January 2026 00:54:26 +0000 (0:00:02.303) 0:07:37.211 ********
included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Use ceph-volume to create osds] *******************************
Friday 02 January 2026 00:54:26 +0000 (0:00:00.529) 0:07:37.741 ********
changed: [testbed-node-4] => (item={'data': 'osd-block-98c0a427-0bfe-5560-90fa-409a46d34f73', 'data_vg': 'ceph-98c0a427-0bfe-5560-90fa-409a46d34f73'})
changed: [testbed-node-5] => (item={'data': 'osd-block-8c17e839-2cbb-5f17-abcc-9f26ae111b42', 'data_vg': 'ceph-8c17e839-2cbb-5f17-abcc-9f26ae111b42'})
changed: [testbed-node-3] => (item={'data': 'osd-block-c483f3a2-63e3-5a58-8db6-ff291b90fd92', 'data_vg': 'ceph-c483f3a2-63e3-5a58-8db6-ff291b90fd92'})
changed: [testbed-node-4] => (item={'data': 'osd-block-b563cbc7-469d-5dd4-bc68-32b49ff22a36', 'data_vg': 'ceph-b563cbc7-469d-5dd4-bc68-32b49ff22a36'})
changed: [testbed-node-5] => (item={'data': 'osd-block-37cfd703-64b2-55b0-ad28-4f6812d5fa0d', 'data_vg': 'ceph-37cfd703-64b2-55b0-ad28-4f6812d5fa0d'})
changed: [testbed-node-3] => (item={'data': 'osd-block-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa', 'data_vg': 'ceph-7b4d4f98-8928-5a24-8a9c-c2096dcbe0fa'})

TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
Friday 02 January 2026 00:55:10 +0000 (0:00:43.278) 0:08:21.019 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
Friday 02 January 2026 00:55:10 +0000 (0:00:00.343) 0:08:21.362 ********
included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Get osd ids] **************************************************
Friday 02 January 2026 00:55:10 +0000 (0:00:00.523) 0:08:21.886 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Collect osd ids] **********************************************
Friday 02 January 2026 00:55:11 +0000 (0:00:01.006) 0:08:22.892 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Include_tasks systemd.yml] ************************************
Friday 02 January 2026 00:55:14 +0000 (0:00:02.848) 0:08:25.741 ********
included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Generate systemd unit file] ***********************************
Friday 02 January 2026 00:55:15 +0000 (0:00:00.551) 0:08:26.292 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
Friday 02 January 2026 00:55:16 +0000 (0:00:01.594) 0:08:27.886 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Enable ceph-osd.target] ***************************************
Friday 02 January 2026 00:55:18 +0000 (0:00:01.201) 0:08:29.087 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Ensure systemd service override directory exists] *************
Friday 02 January 2026 00:55:19 +0000 (0:00:01.767) 0:08:30.855 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
| 2026-01-02 00:58:39.435866 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-01-02 00:58:39.435870 | orchestrator | Friday 02 January 2026 00:55:20 +0000 (0:00:00.459) 0:08:31.315 ******** 2026-01-02 00:58:39.435875 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.435880 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.435885 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.435889 | orchestrator | 2026-01-02 00:58:39.435894 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-01-02 00:58:39.435898 | orchestrator | Friday 02 January 2026 00:55:21 +0000 (0:00:00.698) 0:08:32.014 ******** 2026-01-02 00:58:39.435903 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-01-02 00:58:39.435908 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-01-02 00:58:39.435912 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-01-02 00:58:39.435917 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-02 00:58:39.435922 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-01-02 00:58:39.435926 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-01-02 00:58:39.435931 | orchestrator | 2026-01-02 00:58:39.435936 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-01-02 00:58:39.435940 | orchestrator | Friday 02 January 2026 00:55:22 +0000 (0:00:01.099) 0:08:33.113 ******** 2026-01-02 00:58:39.435945 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-01-02 00:58:39.435950 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-01-02 00:58:39.435957 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-01-02 00:58:39.435962 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-01-02 00:58:39.435970 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-01-02 00:58:39.435975 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-01-02 00:58:39.435980 | 
orchestrator | 2026-01-02 00:58:39.435984 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-01-02 00:58:39.435989 | orchestrator | Friday 02 January 2026 00:55:24 +0000 (0:00:02.192) 0:08:35.305 ******** 2026-01-02 00:58:39.435994 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-01-02 00:58:39.435998 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-01-02 00:58:39.436003 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-01-02 00:58:39.436007 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-01-02 00:58:39.436012 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-01-02 00:58:39.436017 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-01-02 00:58:39.436021 | orchestrator | 2026-01-02 00:58:39.436028 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-01-02 00:58:39.436033 | orchestrator | Friday 02 January 2026 00:55:28 +0000 (0:00:03.696) 0:08:39.002 ******** 2026-01-02 00:58:39.436038 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436043 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.436047 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-02 00:58:39.436052 | orchestrator | 2026-01-02 00:58:39.436056 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-01-02 00:58:39.436061 | orchestrator | Friday 02 January 2026 00:55:31 +0000 (0:00:03.038) 0:08:42.040 ******** 2026-01-02 00:58:39.436066 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436070 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.436075 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-01-02 00:58:39.436080 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-02 00:58:39.436084 | orchestrator | 2026-01-02 00:58:39.436089 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-01-02 00:58:39.436094 | orchestrator | Friday 02 January 2026 00:55:43 +0000 (0:00:12.474) 0:08:54.514 ******** 2026-01-02 00:58:39.436099 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436103 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.436108 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.436113 | orchestrator | 2026-01-02 00:58:39.436117 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-02 00:58:39.436122 | orchestrator | Friday 02 January 2026 00:55:44 +0000 (0:00:01.084) 0:08:55.599 ******** 2026-01-02 00:58:39.436127 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436131 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.436136 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.436141 | orchestrator | 2026-01-02 00:58:39.436145 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-02 00:58:39.436150 | orchestrator | Friday 02 January 2026 00:55:45 +0000 (0:00:00.408) 0:08:56.007 ******** 2026-01-02 00:58:39.436154 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:58:39.436159 | orchestrator | 2026-01-02 00:58:39.436164 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-02 00:58:39.436168 | orchestrator | Friday 02 January 2026 00:55:45 +0000 (0:00:00.545) 0:08:56.552 ******** 2026-01-02 00:58:39.436173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-02 00:58:39.436178 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-01-02 00:58:39.436182 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-02 00:58:39.436187 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436192 | orchestrator | 2026-01-02 00:58:39.436196 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-02 00:58:39.436201 | orchestrator | Friday 02 January 2026 00:55:46 +0000 (0:00:00.984) 0:08:57.536 ******** 2026-01-02 00:58:39.436209 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436214 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.436218 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.436223 | orchestrator | 2026-01-02 00:58:39.436228 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-02 00:58:39.436232 | orchestrator | Friday 02 January 2026 00:55:46 +0000 (0:00:00.331) 0:08:57.868 ******** 2026-01-02 00:58:39.436237 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436241 | orchestrator | 2026-01-02 00:58:39.436246 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-02 00:58:39.436251 | orchestrator | Friday 02 January 2026 00:55:47 +0000 (0:00:00.291) 0:08:58.159 ******** 2026-01-02 00:58:39.436255 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436260 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.436265 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.436269 | orchestrator | 2026-01-02 00:58:39.436274 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-02 00:58:39.436279 | orchestrator | Friday 02 January 2026 00:55:47 +0000 (0:00:00.316) 0:08:58.475 ******** 2026-01-02 00:58:39.436283 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436288 | orchestrator | 2026-01-02 00:58:39.436292 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-02 00:58:39.436297 | orchestrator | Friday 02 January 2026 00:55:47 +0000 (0:00:00.229) 0:08:58.705 ******** 2026-01-02 00:58:39.436302 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436306 | orchestrator | 2026-01-02 00:58:39.436311 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-02 00:58:39.436315 | orchestrator | Friday 02 January 2026 00:55:47 +0000 (0:00:00.240) 0:08:58.946 ******** 2026-01-02 00:58:39.436320 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436325 | orchestrator | 2026-01-02 00:58:39.436329 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-02 00:58:39.436334 | orchestrator | Friday 02 January 2026 00:55:48 +0000 (0:00:00.153) 0:08:59.099 ******** 2026-01-02 00:58:39.436341 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436346 | orchestrator | 2026-01-02 00:58:39.436351 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-02 00:58:39.436355 | orchestrator | Friday 02 January 2026 00:55:48 +0000 (0:00:00.244) 0:08:59.344 ******** 2026-01-02 00:58:39.436360 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436365 | orchestrator | 2026-01-02 00:58:39.436369 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-02 00:58:39.436374 | orchestrator | Friday 02 January 2026 00:55:49 +0000 (0:00:00.792) 0:09:00.136 ******** 2026-01-02 00:58:39.436379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-02 00:58:39.436383 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-02 00:58:39.436388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-02 00:58:39.436392 | orchestrator | skipping: [testbed-node-3] 2026-01-02 
00:58:39.436397 | orchestrator | 2026-01-02 00:58:39.436404 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-02 00:58:39.436409 | orchestrator | Friday 02 January 2026 00:55:49 +0000 (0:00:00.487) 0:09:00.624 ******** 2026-01-02 00:58:39.436428 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436433 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.436438 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.436442 | orchestrator | 2026-01-02 00:58:39.436447 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-02 00:58:39.436452 | orchestrator | Friday 02 January 2026 00:55:49 +0000 (0:00:00.332) 0:09:00.956 ******** 2026-01-02 00:58:39.436456 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436461 | orchestrator | 2026-01-02 00:58:39.436465 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-02 00:58:39.436474 | orchestrator | Friday 02 January 2026 00:55:50 +0000 (0:00:00.217) 0:09:01.174 ******** 2026-01-02 00:58:39.436478 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436483 | orchestrator | 2026-01-02 00:58:39.436488 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-01-02 00:58:39.436492 | orchestrator | 2026-01-02 00:58:39.436497 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-02 00:58:39.436501 | orchestrator | Friday 02 January 2026 00:55:51 +0000 (0:00:00.924) 0:09:02.099 ******** 2026-01-02 00:58:39.436506 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:39.436512 | orchestrator | 2026-01-02 00:58:39.436516 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-01-02 00:58:39.436521 | orchestrator | Friday 02 January 2026 00:55:52 +0000 (0:00:01.015) 0:09:03.114 ******** 2026-01-02 00:58:39.436525 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:39.436530 | orchestrator | 2026-01-02 00:58:39.436535 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-02 00:58:39.436539 | orchestrator | Friday 02 January 2026 00:55:53 +0000 (0:00:01.288) 0:09:04.402 ******** 2026-01-02 00:58:39.436544 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436549 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.436553 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.436558 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.436562 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.436567 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.436572 | orchestrator | 2026-01-02 00:58:39.436576 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-02 00:58:39.436581 | orchestrator | Friday 02 January 2026 00:55:54 +0000 (0:00:01.312) 0:09:05.715 ******** 2026-01-02 00:58:39.436585 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.436590 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.436595 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.436599 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.436604 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.436609 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.436613 | orchestrator | 2026-01-02 00:58:39.436618 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-02 00:58:39.436622 | orchestrator | Friday 02 
January 2026 00:55:55 +0000 (0:00:00.687) 0:09:06.403 ******** 2026-01-02 00:58:39.436627 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.436632 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.436636 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.436641 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.436646 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.436650 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.436655 | orchestrator | 2026-01-02 00:58:39.436659 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-02 00:58:39.436664 | orchestrator | Friday 02 January 2026 00:55:56 +0000 (0:00:01.055) 0:09:07.458 ******** 2026-01-02 00:58:39.436669 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.436673 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.436678 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.436683 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.436687 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.436692 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.436696 | orchestrator | 2026-01-02 00:58:39.436701 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-02 00:58:39.436706 | orchestrator | Friday 02 January 2026 00:55:57 +0000 (0:00:00.740) 0:09:08.198 ******** 2026-01-02 00:58:39.436710 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436718 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.436723 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.436728 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.436732 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.436737 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.436741 | orchestrator | 2026-01-02 00:58:39.436746 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2026-01-02 00:58:39.436754 | orchestrator | Friday 02 January 2026 00:55:58 +0000 (0:00:01.280) 0:09:09.479 ******** 2026-01-02 00:58:39.436759 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436764 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.436768 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.436773 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.436777 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.436782 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.436787 | orchestrator | 2026-01-02 00:58:39.436792 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-02 00:58:39.436796 | orchestrator | Friday 02 January 2026 00:55:59 +0000 (0:00:00.623) 0:09:10.103 ******** 2026-01-02 00:58:39.436801 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436806 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.436810 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.436815 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.436819 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.436824 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.436829 | orchestrator | 2026-01-02 00:58:39.436836 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-02 00:58:39.436841 | orchestrator | Friday 02 January 2026 00:56:00 +0000 (0:00:00.881) 0:09:10.984 ******** 2026-01-02 00:58:39.436845 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.436850 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.436855 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.436859 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.436864 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.436869 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.436873 | 
orchestrator | 2026-01-02 00:58:39.436878 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-02 00:58:39.436883 | orchestrator | Friday 02 January 2026 00:56:01 +0000 (0:00:01.218) 0:09:12.202 ******** 2026-01-02 00:58:39.436887 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.436892 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.436896 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.436901 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.436906 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.436910 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.436915 | orchestrator | 2026-01-02 00:58:39.436920 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-02 00:58:39.436924 | orchestrator | Friday 02 January 2026 00:56:02 +0000 (0:00:01.385) 0:09:13.588 ******** 2026-01-02 00:58:39.436929 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436934 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.436939 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.436943 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.436948 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.436953 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.436957 | orchestrator | 2026-01-02 00:58:39.436962 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-02 00:58:39.436967 | orchestrator | Friday 02 January 2026 00:56:03 +0000 (0:00:00.655) 0:09:14.243 ******** 2026-01-02 00:58:39.436971 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.436976 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.436981 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.436985 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.436990 | orchestrator | ok: [testbed-node-1] 2026-01-02 
00:58:39.436995 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.437003 | orchestrator | 2026-01-02 00:58:39.437008 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-02 00:58:39.437012 | orchestrator | Friday 02 January 2026 00:56:04 +0000 (0:00:00.927) 0:09:15.171 ******** 2026-01-02 00:58:39.437017 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.437022 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.437027 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.437031 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.437036 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.437040 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.437045 | orchestrator | 2026-01-02 00:58:39.437050 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-02 00:58:39.437054 | orchestrator | Friday 02 January 2026 00:56:04 +0000 (0:00:00.628) 0:09:15.800 ******** 2026-01-02 00:58:39.437059 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.437064 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.437068 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.437073 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.437078 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.437082 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.437087 | orchestrator | 2026-01-02 00:58:39.437091 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-02 00:58:39.437096 | orchestrator | Friday 02 January 2026 00:56:05 +0000 (0:00:00.975) 0:09:16.775 ******** 2026-01-02 00:58:39.437101 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.437105 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.437110 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.437115 | orchestrator | skipping: [testbed-node-0] 
2026-01-02 00:58:39.437119 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.437124 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.437128 | orchestrator | 2026-01-02 00:58:39.437133 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-02 00:58:39.437138 | orchestrator | Friday 02 January 2026 00:56:06 +0000 (0:00:00.753) 0:09:17.528 ******** 2026-01-02 00:58:39.437142 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.437147 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.437152 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.437157 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.437161 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.437166 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.437171 | orchestrator | 2026-01-02 00:58:39.437175 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-02 00:58:39.437180 | orchestrator | Friday 02 January 2026 00:56:07 +0000 (0:00:00.872) 0:09:18.400 ******** 2026-01-02 00:58:39.437184 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.437189 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.437194 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.437198 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:39.437203 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:39.437208 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:39.437212 | orchestrator | 2026-01-02 00:58:39.437217 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-02 00:58:39.437225 | orchestrator | Friday 02 January 2026 00:56:08 +0000 (0:00:00.587) 0:09:18.988 ******** 2026-01-02 00:58:39.437229 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.437234 | orchestrator | skipping: [testbed-node-4] 
2026-01-02 00:58:39.437239 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.437243 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.437248 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.437253 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.437257 | orchestrator | 2026-01-02 00:58:39.437262 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-02 00:58:39.437267 | orchestrator | Friday 02 January 2026 00:56:08 +0000 (0:00:00.857) 0:09:19.845 ******** 2026-01-02 00:58:39.437275 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.437279 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.437284 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.437289 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.437293 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.437298 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.437303 | orchestrator | 2026-01-02 00:58:39.437310 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-02 00:58:39.437315 | orchestrator | Friday 02 January 2026 00:56:09 +0000 (0:00:00.667) 0:09:20.512 ******** 2026-01-02 00:58:39.437320 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.437324 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.437329 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.437334 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:39.437338 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:39.437343 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:39.437348 | orchestrator | 2026-01-02 00:58:39.437352 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-01-02 00:58:39.437357 | orchestrator | Friday 02 January 2026 00:56:10 +0000 (0:00:01.294) 0:09:21.807 ******** 2026-01-02 00:58:39.437362 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)]
2026-01-02 00:58:39.437366 | orchestrator |
2026-01-02 00:58:39.437371 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-01-02 00:58:39.437376 | orchestrator | Friday 02 January 2026 00:56:15 +0000 (0:00:04.205) 0:09:26.012 ********
2026-01-02 00:58:39.437380 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-02 00:58:39.437385 | orchestrator |
2026-01-02 00:58:39.437390 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-01-02 00:58:39.437394 | orchestrator | Friday 02 January 2026 00:56:17 +0000 (0:00:02.140) 0:09:28.152 ********
2026-01-02 00:58:39.437399 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:58:39.437404 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:58:39.437408 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:58:39.437428 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:58:39.437433 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:58:39.437438 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:58:39.437443 | orchestrator |
2026-01-02 00:58:39.437448 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-01-02 00:58:39.437452 | orchestrator | Friday 02 January 2026 00:56:19 +0000 (0:00:01.854) 0:09:30.006 ********
2026-01-02 00:58:39.437457 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:58:39.437462 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:58:39.437466 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:58:39.437471 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:58:39.437475 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:58:39.437480 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:58:39.437484 | orchestrator |
2026-01-02 00:58:39.437489 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-01-02 00:58:39.437494 | orchestrator | Friday 02 January 2026 00:56:20 +0000 (0:00:01.016) 0:09:31.023 ********
2026-01-02 00:58:39.437498 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:58:39.437504 | orchestrator |
2026-01-02 00:58:39.437509 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-01-02 00:58:39.437513 | orchestrator | Friday 02 January 2026 00:56:21 +0000 (0:00:01.385) 0:09:32.409 ********
2026-01-02 00:58:39.437518 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:58:39.437523 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:58:39.437527 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:58:39.437532 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:58:39.437536 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:58:39.437541 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:58:39.437546 | orchestrator |
2026-01-02 00:58:39.437555 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-01-02 00:58:39.437560 | orchestrator | Friday 02 January 2026 00:56:23 +0000 (0:00:01.984) 0:09:34.394 ********
2026-01-02 00:58:39.437565 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:58:39.437569 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:58:39.437574 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:58:39.437579 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:58:39.437583 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:58:39.437588 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:58:39.437592 | orchestrator |
2026-01-02 00:58:39.437597 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-01-02 00:58:39.437602 | orchestrator | Friday 02 January 2026 00:56:27 +0000 (0:00:03.599) 0:09:37.993 ********
2026-01-02 00:58:39.437607 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:58:39.437611 | orchestrator |
2026-01-02 00:58:39.437616 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-01-02 00:58:39.437621 | orchestrator | Friday 02 January 2026 00:56:28 +0000 (0:00:01.370) 0:09:39.364 ********
2026-01-02 00:58:39.437626 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.437630 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.437635 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.437640 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:58:39.437644 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:58:39.437649 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:58:39.437653 | orchestrator |
2026-01-02 00:58:39.437658 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-01-02 00:58:39.437666 | orchestrator | Friday 02 January 2026 00:56:29 +0000 (0:00:00.999) 0:09:40.363 ********
2026-01-02 00:58:39.437671 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:58:39.437675 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:58:39.437680 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:58:39.437685 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:58:39.437689 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:58:39.437694 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:58:39.437699 | orchestrator |
2026-01-02 00:58:39.437703 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-01-02 00:58:39.437708 | orchestrator | Friday 02 January 2026 00:56:31 +0000 (0:00:02.470) 0:09:42.834 ********
2026-01-02 00:58:39.437713 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.437717 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.437722 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.437727 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:58:39.437731 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:58:39.437736 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:58:39.437741 | orchestrator |
2026-01-02 00:58:39.437748 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-01-02 00:58:39.437753 | orchestrator |
2026-01-02 00:58:39.437758 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-02 00:58:39.437762 | orchestrator | Friday 02 January 2026 00:56:33 +0000 (0:00:01.182) 0:09:44.017 ********
2026-01-02 00:58:39.437767 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:58:39.437772 | orchestrator |
2026-01-02 00:58:39.437776 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-02 00:58:39.437781 | orchestrator | Friday 02 January 2026 00:56:33 +0000 (0:00:00.518) 0:09:44.535 ********
2026-01-02 00:58:39.437786 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:58:39.437791 | orchestrator |
2026-01-02 00:58:39.437795 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-02 00:58:39.437800 | orchestrator | Friday 02 January 2026 00:56:34 +0000 (0:00:00.367) 0:09:45.307 ********
2026-01-02 00:58:39.437808 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.437813 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.437817 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.437822 | orchestrator |
2026-01-02 00:58:39.437826 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-02 00:58:39.437831 | orchestrator | Friday 02 January 2026 00:56:34 +0000 (0:00:00.367) 0:09:45.674 ********
2026-01-02 00:58:39.437836 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.437841 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.437845 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.437850 | orchestrator |
2026-01-02 00:58:39.437855 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-02 00:58:39.437859 | orchestrator | Friday 02 January 2026 00:56:35 +0000 (0:00:00.775) 0:09:46.450 ********
2026-01-02 00:58:39.437864 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.437868 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.437873 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.437878 | orchestrator |
2026-01-02 00:58:39.437882 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-02 00:58:39.437887 | orchestrator | Friday 02 January 2026 00:56:36 +0000 (0:00:01.168) 0:09:47.619 ********
2026-01-02 00:58:39.437892 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.437896 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.437901 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.437905 | orchestrator |
2026-01-02 00:58:39.437910 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-02 00:58:39.437915 | orchestrator | Friday 02 January 2026 00:56:37 +0000 (0:00:00.898) 0:09:48.518 ********
2026-01-02 00:58:39.437919 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.437924 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.437929 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.437933 | orchestrator |
2026-01-02 00:58:39.437938 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-02 00:58:39.437943 | orchestrator | Friday 02 January 2026 00:56:37 +0000 (0:00:00.447) 0:09:48.966 ********
2026-01-02 00:58:39.437947 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.437952 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.437957 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.437961 | orchestrator |
2026-01-02 00:58:39.437966 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-02 00:58:39.437971 | orchestrator | Friday 02 January 2026 00:56:38 +0000 (0:00:00.404) 0:09:49.370 ********
2026-01-02 00:58:39.437976 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.437980 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.437985 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.437990 | orchestrator |
2026-01-02 00:58:39.437995 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-02 00:58:39.437999 | orchestrator | Friday 02 January 2026 00:56:38 +0000 (0:00:00.576) 0:09:49.947 ********
2026-01-02 00:58:39.438004 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.438009 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.438013 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.438045 | orchestrator |
2026-01-02 00:58:39.438050 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-02 00:58:39.438054 | orchestrator | Friday 02 January 2026 00:56:39 +0000 (0:00:00.800) 0:09:50.747 ********
2026-01-02 00:58:39.438059 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.438064 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.438069 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.438073 | orchestrator |
2026-01-02 00:58:39.438078 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-02 00:58:39.438083 | orchestrator | Friday 02 January 2026 00:56:40 +0000 (0:00:00.853) 0:09:51.600 ********
2026-01-02 00:58:39.438087 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.438098 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.438105 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.438110 | orchestrator |
2026-01-02 00:58:39.438115 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-02 00:58:39.438123 | orchestrator | Friday 02 January 2026 00:56:40 +0000 (0:00:00.351) 0:09:51.952 ********
2026-01-02 00:58:39.438128 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.438133 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.438137 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.438142 | orchestrator |
2026-01-02 00:58:39.438147 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-02 00:58:39.438151 | orchestrator | Friday 02 January 2026 00:56:41 +0000 (0:00:00.724) 0:09:52.676 ********
2026-01-02 00:58:39.438156 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.438161 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.438165 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.438170 | orchestrator |
2026-01-02 00:58:39.438175 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-02 00:58:39.438179 | orchestrator | Friday 02 January 2026 00:56:42 +0000 (0:00:00.487) 0:09:53.164 ********
2026-01-02 00:58:39.438184 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.438191 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.438196 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.438201 | orchestrator |
2026-01-02 00:58:39.438206 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-02 00:58:39.438210 | orchestrator | Friday 02 January 2026 00:56:42 +0000 (0:00:00.324) 0:09:53.489 ********
2026-01-02 00:58:39.438215 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.438219 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.438224 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.438229 | orchestrator |
2026-01-02 00:58:39.438233 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-02 00:58:39.438238 | orchestrator | Friday 02 January 2026 00:56:42 +0000 (0:00:00.306) 0:09:53.795 ********
2026-01-02 00:58:39.438242 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.438247 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.438252 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.438256 | orchestrator |
2026-01-02 00:58:39.438261 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-02 00:58:39.438265 | orchestrator | Friday 02 January 2026 00:56:43 +0000 (0:00:00.630) 0:09:54.426 ********
2026-01-02 00:58:39.438270 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.438275 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.438279 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.438284 | orchestrator |
2026-01-02 00:58:39.438289 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-02 00:58:39.438293 | orchestrator | Friday 02 January 2026 00:56:43 +0000 (0:00:00.320) 0:09:54.746 ********
2026-01-02 00:58:39.438298 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.438302 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.438308 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.438312 | orchestrator |
2026-01-02 00:58:39.438317 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-02 00:58:39.438321 | orchestrator | Friday 02 January 2026 00:56:44 +0000 (0:00:00.344) 0:09:55.090 ********
2026-01-02 00:58:39.438326 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.438331 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.438335 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.438340 | orchestrator |
2026-01-02 00:58:39.438345 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-02 00:58:39.438349 | orchestrator | Friday 02 January 2026 00:56:44 +0000 (0:00:00.392) 0:09:55.483 ********
2026-01-02 00:58:39.438354 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.438358 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.438363 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.438372 | orchestrator |
2026-01-02 00:58:39.438377 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-01-02 00:58:39.438381 | orchestrator | Friday 02 January 2026 00:56:45 +0000 (0:00:01.200) 0:09:56.683 ********
2026-01-02 00:58:39.438386 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.438390 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.438395 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-01-02 00:58:39.438400 | orchestrator |
2026-01-02 00:58:39.438404 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-01-02 00:58:39.438409 | orchestrator | Friday 02 January 2026 00:56:46 +0000 (0:00:00.488) 0:09:57.172 ********
2026-01-02 00:58:39.438429 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-02 00:58:39.438434 | orchestrator |
2026-01-02 00:58:39.438439 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-01-02 00:58:39.438444 | orchestrator | Friday 02 January 2026 00:56:48 +0000 (0:00:02.131) 0:09:59.303 ********
2026-01-02 00:58:39.438450 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-01-02 00:58:39.438457 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.438461 | orchestrator |
2026-01-02 00:58:39.438466 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-01-02 00:58:39.438471 | orchestrator | Friday 02 January 2026 00:56:48 +0000 (0:00:00.341) 0:09:59.645 ********
2026-01-02 00:58:39.438477 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-02 00:58:39.438487 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-02 00:58:39.438492 | orchestrator |
2026-01-02 00:58:39.438500 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-01-02 00:58:39.438505 | orchestrator | Friday 02 January 2026 00:56:57 +0000 (0:00:08.683) 0:10:08.328 ********
2026-01-02 00:58:39.438509 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-02 00:58:39.438514 | orchestrator |
2026-01-02 00:58:39.438519 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-01-02 00:58:39.438523 | orchestrator | Friday 02 January 2026 00:57:01 +0000 (0:00:03.742) 0:10:12.070 ********
2026-01-02 00:58:39.438528 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:58:39.438533 | orchestrator |
2026-01-02 00:58:39.438537 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-01-02 00:58:39.438542 | orchestrator | Friday 02 January 2026 00:57:01 +0000 (0:00:00.539) 0:10:12.610 ********
2026-01-02 00:58:39.438549 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-02 00:58:39.438554 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-02 00:58:39.438559 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-02 00:58:39.438563 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-01-02 00:58:39.438568 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-01-02 00:58:39.438573 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-01-02 00:58:39.438577 | orchestrator |
2026-01-02 00:58:39.438582 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-01-02 00:58:39.438591 | orchestrator | Friday 02 January 2026 00:57:02 +0000 (0:00:01.043) 0:10:13.654 ********
2026-01-02 00:58:39.438595 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:58:39.438600 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-02 00:58:39.438605 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-02 00:58:39.438609 | orchestrator |
2026-01-02 00:58:39.438614 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-01-02 00:58:39.438618 | orchestrator | Friday 02 January 2026 00:57:05 +0000 (0:00:02.464) 0:10:16.119 ********
2026-01-02 00:58:39.438623 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-02 00:58:39.438628 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-02 00:58:39.438632 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:58:39.438637 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-02 00:58:39.438642 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-02 00:58:39.438646 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:58:39.438651 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-02 00:58:39.438655 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-02 00:58:39.438660 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:58:39.438665 | orchestrator |
2026-01-02 00:58:39.438669 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-01-02 00:58:39.438674 | orchestrator | Friday 02 January 2026 00:57:06 +0000 (0:00:01.566) 0:10:17.685 ********
2026-01-02 00:58:39.438679 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:58:39.438683 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:58:39.438688 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:58:39.438692 | orchestrator |
2026-01-02 00:58:39.438697 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-01-02 00:58:39.438702 | orchestrator | Friday 02 January 2026 00:57:09 +0000 (0:00:02.738) 0:10:20.424 ********
2026-01-02 00:58:39.438706 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.438711 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.438715 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.438720 | orchestrator |
2026-01-02 00:58:39.438725 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-01-02 00:58:39.438729 | orchestrator | Friday 02 January 2026 00:57:09 +0000 (0:00:00.389) 0:10:20.813 ********
2026-01-02 00:58:39.438734 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:58:39.438739 | orchestrator |
2026-01-02 00:58:39.438743 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-01-02 00:58:39.438748 | orchestrator | Friday 02 January 2026 00:57:10 +0000 (0:00:00.847) 0:10:21.661 ********
2026-01-02 00:58:39.438753 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:58:39.438757 | orchestrator |
2026-01-02 00:58:39.438762 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-01-02 00:58:39.438766 | orchestrator | Friday 02 January 2026 00:57:11 +0000 (0:00:00.537) 0:10:22.198 ********
2026-01-02 00:58:39.438771 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:58:39.438776 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:58:39.438780 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:58:39.438785 | orchestrator |
2026-01-02 00:58:39.438789 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-01-02 00:58:39.438794 | orchestrator | Friday 02 January 2026 00:57:12 +0000 (0:00:01.291) 0:10:23.490 ********
2026-01-02 00:58:39.438799 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:58:39.438803 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:58:39.438808 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:58:39.438813 | orchestrator |
2026-01-02 00:58:39.438817 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-01-02 00:58:39.438822 | orchestrator | Friday 02 January 2026 00:57:13 +0000 (0:00:01.471) 0:10:24.961 ********
2026-01-02 00:58:39.438830 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:58:39.438835 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:58:39.438840 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:58:39.438844 | orchestrator |
2026-01-02 00:58:39.438849 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-01-02 00:58:39.438856 | orchestrator | Friday 02 January 2026 00:57:15 +0000 (0:00:01.882) 0:10:26.844 ********
2026-01-02 00:58:39.438861 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:58:39.438866 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:58:39.438871 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:58:39.438875 | orchestrator |
2026-01-02 00:58:39.438880 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-01-02 00:58:39.438885 | orchestrator | Friday 02 January 2026 00:57:17 +0000 (0:00:02.002) 0:10:28.847 ********
2026-01-02 00:58:39.438889 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.438894 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.438899 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.438903 | orchestrator |
2026-01-02 00:58:39.438908 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-02 00:58:39.438912 | orchestrator | Friday 02 January 2026 00:57:19 +0000 (0:00:01.451) 0:10:30.299 ********
2026-01-02 00:58:39.438917 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:58:39.438924 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:58:39.438929 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:58:39.438933 | orchestrator |
2026-01-02 00:58:39.438938 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-01-02 00:58:39.438942 | orchestrator | Friday 02 January 2026 00:57:20 +0000 (0:00:00.679) 0:10:30.978 ********
2026-01-02 00:58:39.438947 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:58:39.438952 | orchestrator |
2026-01-02 00:58:39.438956 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-01-02 00:58:39.438961 | orchestrator | Friday 02 January 2026 00:57:20 +0000 (0:00:00.794) 0:10:31.773 ********
2026-01-02 00:58:39.438966 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.438970 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.438975 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.438979 | orchestrator |
2026-01-02 00:58:39.438984 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-01-02 00:58:39.438989 | orchestrator | Friday 02 January 2026 00:57:21 +0000 (0:00:00.341) 0:10:32.115 ********
2026-01-02 00:58:39.438993 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:58:39.438998 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:58:39.439003 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:58:39.439007 | orchestrator |
2026-01-02 00:58:39.439012 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-01-02 00:58:39.439016 | orchestrator | Friday 02 January 2026 00:57:22 +0000 (0:00:01.232) 0:10:33.347 ********
2026-01-02 00:58:39.439021 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:58:39.439026 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-02 00:58:39.439030 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-02 00:58:39.439035 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.439039 | orchestrator |
2026-01-02 00:58:39.439044 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-01-02 00:58:39.439049 | orchestrator | Friday 02 January 2026 00:57:23 +0000 (0:00:01.054) 0:10:34.402 ********
2026-01-02 00:58:39.439053 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.439058 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.439062 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.439067 | orchestrator |
2026-01-02 00:58:39.439072 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-01-02 00:58:39.439076 | orchestrator |
2026-01-02 00:58:39.439084 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-02 00:58:39.439089 | orchestrator | Friday 02 January 2026 00:57:24 +0000 (0:00:00.860) 0:10:35.263 ********
2026-01-02 00:58:39.439094 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:58:39.439098 | orchestrator |
2026-01-02 00:58:39.439103 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-02 00:58:39.439107 | orchestrator | Friday 02 January 2026 00:57:24 +0000 (0:00:00.537) 0:10:35.801 ********
2026-01-02 00:58:39.439112 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:58:39.439117 | orchestrator |
2026-01-02 00:58:39.439121 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-02 00:58:39.439126 | orchestrator | Friday 02 January 2026 00:57:25 +0000 (0:00:00.758) 0:10:36.559 ********
2026-01-02 00:58:39.439131 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.439135 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.439140 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.439145 | orchestrator |
2026-01-02 00:58:39.439149 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-02 00:58:39.439154 | orchestrator | Friday 02 January 2026 00:57:25 +0000 (0:00:00.309) 0:10:36.869 ********
2026-01-02 00:58:39.439158 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.439163 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.439168 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.439172 | orchestrator |
2026-01-02 00:58:39.439177 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-02 00:58:39.439181 | orchestrator | Friday 02 January 2026 00:57:26 +0000 (0:00:00.716) 0:10:37.586 ********
2026-01-02 00:58:39.439186 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.439191 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.439195 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.439200 | orchestrator |
2026-01-02 00:58:39.439205 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-02 00:58:39.439209 | orchestrator | Friday 02 January 2026 00:57:27 +0000 (0:00:00.985) 0:10:38.571 ********
2026-01-02 00:58:39.439214 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.439218 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.439223 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.439227 | orchestrator |
2026-01-02 00:58:39.439232 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-02 00:58:39.439237 | orchestrator | Friday 02 January 2026 00:57:28 +0000 (0:00:00.739) 0:10:39.310 ********
2026-01-02 00:58:39.439241 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.439249 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.439254 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.439259 | orchestrator |
2026-01-02 00:58:39.439263 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-02 00:58:39.439268 | orchestrator | Friday 02 January 2026 00:57:28 +0000 (0:00:00.332) 0:10:39.643 ********
2026-01-02 00:58:39.439273 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.439277 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.439282 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.439287 | orchestrator |
2026-01-02 00:58:39.439291 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-02 00:58:39.439296 | orchestrator | Friday 02 January 2026 00:57:29 +0000 (0:00:00.342) 0:10:39.985 ********
2026-01-02 00:58:39.439301 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.439305 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.439310 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.439314 | orchestrator |
2026-01-02 00:58:39.439322 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-02 00:58:39.439327 | orchestrator | Friday 02 January 2026 00:57:29 +0000 (0:00:00.291) 0:10:40.276 ********
2026-01-02 00:58:39.439335 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.439339 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.439344 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.439349 | orchestrator |
2026-01-02 00:58:39.439353 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-02 00:58:39.439358 | orchestrator | Friday 02 January 2026 00:57:30 +0000 (0:00:01.083) 0:10:41.359 ********
2026-01-02 00:58:39.439363 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.439367 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.439372 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.439377 | orchestrator |
2026-01-02 00:58:39.439391 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-02 00:58:39.439395 | orchestrator | Friday 02 January 2026 00:57:31 +0000 (0:00:00.737) 0:10:42.096 ********
2026-01-02 00:58:39.439400 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.439405 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.439409 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.439431 | orchestrator |
2026-01-02 00:58:39.439439 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-02 00:58:39.439446 | orchestrator | Friday 02 January 2026 00:57:31 +0000 (0:00:00.343) 0:10:42.440 ********
2026-01-02 00:58:39.439454 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.439461 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.439468 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.439475 | orchestrator |
2026-01-02 00:58:39.439482 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-02 00:58:39.439487 | orchestrator | Friday 02 January 2026 00:57:31 +0000 (0:00:00.290) 0:10:42.730 ********
2026-01-02 00:58:39.439492 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.439496 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.439501 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.439505 | orchestrator |
2026-01-02 00:58:39.439510 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-02 00:58:39.439515 | orchestrator | Friday 02 January 2026 00:57:32 +0000 (0:00:00.679) 0:10:43.409 ********
2026-01-02 00:58:39.439519 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.439524 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.439528 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.439533 | orchestrator |
2026-01-02 00:58:39.439538 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-02 00:58:39.439542 | orchestrator | Friday 02 January 2026 00:57:32 +0000 (0:00:00.339) 0:10:43.749 ********
2026-01-02 00:58:39.439547 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.439551 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.439556 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.439560 | orchestrator |
2026-01-02 00:58:39.439565 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-02 00:58:39.439569 | orchestrator | Friday 02 January 2026 00:57:33 +0000 (0:00:00.345) 0:10:44.095 ********
2026-01-02 00:58:39.439574 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.439579 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.439583 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.439588 | orchestrator |
2026-01-02 00:58:39.439593 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-02 00:58:39.439597 | orchestrator | Friday 02 January 2026 00:57:33 +0000 (0:00:00.429) 0:10:44.524 ********
2026-01-02 00:58:39.439602 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.439606 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.439611 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.439616 | orchestrator |
2026-01-02 00:58:39.439621 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-02 00:58:39.439626 | orchestrator | Friday 02 January 2026 00:57:34 +0000 (0:00:00.614) 0:10:45.139 ********
2026-01-02 00:58:39.439630 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:58:39.439640 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:58:39.439645 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:58:39.439650 | orchestrator |
2026-01-02 00:58:39.439655 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-02 00:58:39.439660 | orchestrator | Friday 02 January 2026 00:57:34 +0000 (0:00:00.328) 0:10:45.467 ********
2026-01-02 00:58:39.439665 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.439669 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.439674 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.439679 | orchestrator |
2026-01-02 00:58:39.439684 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-02 00:58:39.439689 | orchestrator | Friday 02 January 2026 00:57:34 +0000 (0:00:00.345) 0:10:45.812 ********
2026-01-02 00:58:39.439694 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:58:39.439698 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:58:39.439703 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:58:39.439708 | orchestrator |
2026-01-02 00:58:39.439713 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-01-02 00:58:39.439718 | orchestrator | Friday 02 January 2026 00:57:35 +0000 (0:00:00.972) 0:10:46.784 ********
2026-01-02 00:58:39.439727 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:58:39.439732 | orchestrator |
2026-01-02 00:58:39.439737 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-01-02 00:58:39.439742 | orchestrator | Friday 02 January 2026 00:57:36 +0000 (0:00:00.549) 0:10:47.333 ********
2026-01-02 00:58:39.439747 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:58:39.439752 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-02 00:58:39.439757 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-02 00:58:39.439762 | orchestrator |
2026-01-02 00:58:39.439767 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-01-02 00:58:39.439771 | orchestrator | Friday 02 January 2026 00:57:38 +0000 (0:00:02.194) 0:10:49.528 ********
2026-01-02 00:58:39.439776 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-02 00:58:39.439781 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-02 00:58:39.439789 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-02 00:58:39.439803
| orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-02 00:58:39.439808 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:58:39.439813 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:58:39.439818 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-02 00:58:39.439823 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-02 00:58:39.439828 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:58:39.439833 | orchestrator | 2026-01-02 00:58:39.439837 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-01-02 00:58:39.439842 | orchestrator | Friday 02 January 2026 00:57:40 +0000 (0:00:01.558) 0:10:51.086 ******** 2026-01-02 00:58:39.439847 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.439852 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.439857 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.439862 | orchestrator | 2026-01-02 00:58:39.439867 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-02 00:58:39.439872 | orchestrator | Friday 02 January 2026 00:57:40 +0000 (0:00:00.322) 0:10:51.409 ******** 2026-01-02 00:58:39.439877 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:58:39.439882 | orchestrator | 2026-01-02 00:58:39.439887 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-02 00:58:39.439892 | orchestrator | Friday 02 January 2026 00:57:40 +0000 (0:00:00.562) 0:10:51.972 ******** 2026-01-02 00:58:39.439897 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-02 00:58:39.439907 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-02 00:58:39.439913 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-02 00:58:39.439918 | orchestrator | 2026-01-02 00:58:39.439922 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-02 00:58:39.439927 | orchestrator | Friday 02 January 2026 00:57:42 +0000 (0:00:01.394) 0:10:53.367 ******** 2026-01-02 00:58:39.439932 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 00:58:39.439937 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 00:58:39.439942 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 00:58:39.439947 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-02 00:58:39.439952 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-02 00:58:39.439957 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-02 00:58:39.439962 | orchestrator | 2026-01-02 00:58:39.439967 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-02 00:58:39.439972 | orchestrator | Friday 02 January 2026 00:57:47 +0000 (0:00:04.744) 0:10:58.111 ******** 2026-01-02 00:58:39.439976 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 00:58:39.439981 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-02 00:58:39.439986 | orchestrator | 
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 00:58:39.439991 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-02 00:58:39.439996 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 00:58:39.440001 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-02 00:58:39.440005 | orchestrator | 2026-01-02 00:58:39.440010 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-02 00:58:39.440015 | orchestrator | Friday 02 January 2026 00:57:49 +0000 (0:00:02.553) 0:11:00.665 ******** 2026-01-02 00:58:39.440020 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-02 00:58:39.440025 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:58:39.440030 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-02 00:58:39.440035 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:58:39.440039 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-02 00:58:39.440044 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:58:39.440049 | orchestrator | 2026-01-02 00:58:39.440057 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-02 00:58:39.440062 | orchestrator | Friday 02 January 2026 00:57:51 +0000 (0:00:01.499) 0:11:02.164 ******** 2026-01-02 00:58:39.440067 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-02 00:58:39.440072 | orchestrator | 2026-01-02 00:58:39.440077 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-02 00:58:39.440082 | orchestrator | Friday 02 January 2026 00:57:51 +0000 (0:00:00.251) 0:11:02.416 ******** 2026-01-02 00:58:39.440087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-01-02 00:58:39.440095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-02 00:58:39.440104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-02 00:58:39.440109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-02 00:58:39.440114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-02 00:58:39.440119 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.440124 | orchestrator | 2026-01-02 00:58:39.440129 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-02 00:58:39.440134 | orchestrator | Friday 02 January 2026 00:57:53 +0000 (0:00:01.675) 0:11:04.092 ******** 2026-01-02 00:58:39.440139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-02 00:58:39.440144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-02 00:58:39.440149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-02 00:58:39.440154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-02 00:58:39.440159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-02 00:58:39.440164 | orchestrator | skipping: [testbed-node-3] 2026-01-02 
00:58:39.440168 | orchestrator | 2026-01-02 00:58:39.440174 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-02 00:58:39.440179 | orchestrator | Friday 02 January 2026 00:57:53 +0000 (0:00:00.615) 0:11:04.707 ******** 2026-01-02 00:58:39.440184 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-02 00:58:39.440189 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-02 00:58:39.440194 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-02 00:58:39.440199 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-02 00:58:39.440203 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-02 00:58:39.440208 | orchestrator | 2026-01-02 00:58:39.440213 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-02 00:58:39.440218 | orchestrator | Friday 02 January 2026 00:58:24 +0000 (0:00:30.871) 0:11:35.578 ******** 2026-01-02 00:58:39.440223 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.440228 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.440233 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.440238 | orchestrator | 2026-01-02 00:58:39.440243 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-02 00:58:39.440248 | orchestrator | 
Friday 02 January 2026 00:58:24 +0000 (0:00:00.319) 0:11:35.898 ******** 2026-01-02 00:58:39.440253 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.440258 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.440262 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.440267 | orchestrator | 2026-01-02 00:58:39.440272 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-02 00:58:39.440277 | orchestrator | Friday 02 January 2026 00:58:25 +0000 (0:00:00.328) 0:11:36.227 ******** 2026-01-02 00:58:39.440286 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:58:39.440291 | orchestrator | 2026-01-02 00:58:39.440296 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-01-02 00:58:39.440301 | orchestrator | Friday 02 January 2026 00:58:26 +0000 (0:00:00.860) 0:11:37.087 ******** 2026-01-02 00:58:39.440309 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:58:39.440315 | orchestrator | 2026-01-02 00:58:39.440320 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-02 00:58:39.440324 | orchestrator | Friday 02 January 2026 00:58:26 +0000 (0:00:00.547) 0:11:37.635 ******** 2026-01-02 00:58:39.440329 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:58:39.440334 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:58:39.440339 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:58:39.440344 | orchestrator | 2026-01-02 00:58:39.440349 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-02 00:58:39.440354 | orchestrator | Friday 02 January 2026 00:58:28 +0000 (0:00:01.351) 0:11:38.987 ******** 2026-01-02 00:58:39.440359 | orchestrator | changed: 
[testbed-node-3] 2026-01-02 00:58:39.440364 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:58:39.440369 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:58:39.440374 | orchestrator | 2026-01-02 00:58:39.440381 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-02 00:58:39.440386 | orchestrator | Friday 02 January 2026 00:58:29 +0000 (0:00:01.561) 0:11:40.548 ******** 2026-01-02 00:58:39.440391 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:58:39.440396 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:58:39.440401 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:58:39.440406 | orchestrator | 2026-01-02 00:58:39.440411 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-02 00:58:39.440434 | orchestrator | Friday 02 January 2026 00:58:31 +0000 (0:00:01.939) 0:11:42.488 ******** 2026-01-02 00:58:39.440440 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-02 00:58:39.440445 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-02 00:58:39.440449 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-02 00:58:39.440454 | orchestrator | 2026-01-02 00:58:39.440459 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-02 00:58:39.440464 | orchestrator | Friday 02 January 2026 00:58:34 +0000 (0:00:02.945) 0:11:45.433 ******** 2026-01-02 00:58:39.440469 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.440474 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.440479 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.440483 | orchestrator 
| 2026-01-02 00:58:39.440488 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-02 00:58:39.440493 | orchestrator | Friday 02 January 2026 00:58:34 +0000 (0:00:00.395) 0:11:45.828 ******** 2026-01-02 00:58:39.440498 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:58:39.440503 | orchestrator | 2026-01-02 00:58:39.440508 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-02 00:58:39.440513 | orchestrator | Friday 02 January 2026 00:58:35 +0000 (0:00:00.544) 0:11:46.373 ******** 2026-01-02 00:58:39.440517 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.440522 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.440527 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.440532 | orchestrator | 2026-01-02 00:58:39.440537 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-02 00:58:39.440542 | orchestrator | Friday 02 January 2026 00:58:36 +0000 (0:00:00.627) 0:11:47.000 ******** 2026-01-02 00:58:39.440551 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:58:39.440556 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:58:39.440561 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:58:39.440565 | orchestrator | 2026-01-02 00:58:39.440570 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-02 00:58:39.440575 | orchestrator | Friday 02 January 2026 00:58:36 +0000 (0:00:00.353) 0:11:47.354 ******** 2026-01-02 00:58:39.440580 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-02 00:58:39.440585 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-02 00:58:39.440590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-02 00:58:39.440595 | orchestrator 
| skipping: [testbed-node-3] 2026-01-02 00:58:39.440599 | orchestrator | 2026-01-02 00:58:39.440604 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-02 00:58:39.440609 | orchestrator | Friday 02 January 2026 00:58:37 +0000 (0:00:00.669) 0:11:48.023 ******** 2026-01-02 00:58:39.440614 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:58:39.440619 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:58:39.440624 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:58:39.440629 | orchestrator | 2026-01-02 00:58:39.440634 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:58:39.440638 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-01-02 00:58:39.440643 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-01-02 00:58:39.440648 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-01-02 00:58:39.440653 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-01-02 00:58:39.440658 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-01-02 00:58:39.440667 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-01-02 00:58:39.440672 | orchestrator | 2026-01-02 00:58:39.440677 | orchestrator | 2026-01-02 00:58:39.440681 | orchestrator | 2026-01-02 00:58:39.440686 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:58:39.440691 | orchestrator | Friday 02 January 2026 00:58:37 +0000 (0:00:00.264) 0:11:48.288 ******** 2026-01-02 00:58:39.440696 | orchestrator | =============================================================================== 
2026-01-02 00:58:39.440701 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 60.28s 2026-01-02 00:58:39.440706 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.28s 2026-01-02 00:58:39.440711 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.52s 2026-01-02 00:58:39.440727 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.87s 2026-01-02 00:58:39.440733 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.77s 2026-01-02 00:58:39.440737 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.51s 2026-01-02 00:58:39.440742 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.47s 2026-01-02 00:58:39.440747 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.97s 2026-01-02 00:58:39.440752 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.85s 2026-01-02 00:58:39.440757 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.68s 2026-01-02 00:58:39.440765 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.18s 2026-01-02 00:58:39.440770 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.53s 2026-01-02 00:58:39.440775 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.20s 2026-01-02 00:58:39.440780 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.74s 2026-01-02 00:58:39.440785 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.21s 2026-01-02 00:58:39.440790 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.75s 2026-01-02 
00:58:39.440795 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.74s 2026-01-02 00:58:39.440799 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.70s 2026-01-02 00:58:39.440807 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.60s 2026-01-02 00:58:39.440812 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.57s 2026-01-02 00:58:39.440817 | orchestrator | 2026-01-02 00:58:39 | INFO  | Task ea5cb038-122d-49fa-86ac-aa194d5b539f is in state STARTED 2026-01-02 00:58:39.440822 | orchestrator | 2026-01-02 00:58:39 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 00:58:39.440827 | orchestrator | 2026-01-02 00:58:39 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED 2026-01-02 00:58:39.440832 | orchestrator | 2026-01-02 00:58:39 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:59:13.068244 | orchestrator | 2026-01-02 00:59:13.068359 | orchestrator | 2026-01-02 00:59:13.068403 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 00:59:13.068449 | orchestrator | 2026-01-02 00:59:13.069222 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 00:59:13.069254 | orchestrator | Friday 02 January 2026 00:56:17 +0000 (0:00:00.270) 0:00:00.270 ******** 2026-01-02 00:59:13.069261 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:59:13.069271 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:59:13.069278 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:59:13.069286 | orchestrator | 2026-01-02 00:59:13.069293 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 00:59:13.069300 | orchestrator | Friday 02 January 2026 00:56:18 +0000 (0:00:00.352) 0:00:00.623 ******** 2026-01-02 00:59:13.069336 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-01-02 00:59:13.069345 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-01-02 00:59:13.069352 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-01-02 00:59:13.069359 | orchestrator | 2026-01-02 00:59:13.069366 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-01-02 00:59:13.069395 | orchestrator | 2026-01-02 00:59:13.069402 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-02 00:59:13.069408 | orchestrator | Friday 02 January 2026 00:56:18 +0000 (0:00:00.425) 0:00:01.048 ******** 2026-01-02 00:59:13.069453 | orchestrator | included: 
/ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:59:13.069462 | orchestrator | 2026-01-02 00:59:13.069468 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-01-02 00:59:13.069475 | orchestrator | Friday 02 January 2026 00:56:19 +0000 (0:00:00.496) 0:00:01.545 ******** 2026-01-02 00:59:13.069482 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-02 00:59:13.069489 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-02 00:59:13.069495 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-02 00:59:13.069501 | orchestrator | 2026-01-02 00:59:13.069507 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-01-02 00:59:13.069513 | orchestrator | Friday 02 January 2026 00:56:19 +0000 (0:00:00.680) 0:00:02.225 ******** 2026-01-02 00:59:13.069537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-02 00:59:13.069547 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-02 00:59:13.069567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-02 00:59:13.069595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-02 00:59:13.069604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-02 00:59:13.069609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-02 00:59:13.069613 | orchestrator | 2026-01-02 00:59:13.069617 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-02 00:59:13.069621 | orchestrator | Friday 02 January 2026 00:56:21 +0000 (0:00:01.905) 0:00:04.131 ******** 2026-01-02 00:59:13.069627 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:59:13.069633 | orchestrator | 2026-01-02 00:59:13.069639 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-01-02 00:59:13.069645 | orchestrator | Friday 02 January 2026 00:56:22 +0000 (0:00:00.576) 0:00:04.708 ******** 2026-01-02 00:59:13.069662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-02 00:59:13.069669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-02 00:59:13.069679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-02 00:59:13.069686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-02 00:59:13.069697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-02 00:59:13.069710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-02 00:59:13.069716 | orchestrator | 2026-01-02 00:59:13.069722 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-02 00:59:13.069728 | orchestrator | Friday 02 January 2026 00:56:25 +0000 (0:00:02.808) 0:00:07.517 ******** 2026-01-02 00:59:13.069737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-02 00:59:13.069745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-02 00:59:13.069751 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:13.069759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-02 00:59:13.069775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-02 00:59:13.069782 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:13.069791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-02 00:59:13.069798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-02 00:59:13.069805 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:13.069811 | orchestrator | 2026-01-02 00:59:13.069818 | 
orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-01-02 00:59:13.069830 | orchestrator | Friday 02 January 2026 00:56:26 +0000 (0:00:01.211) 0:00:08.728 ******** 2026-01-02 00:59:13.069837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-02 00:59:13.069850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-02 00:59:13.069855 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:13.069859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-02 00:59:13.069866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-02 00:59:13.069880 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:13.069884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-02 00:59:13.069894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-02 00:59:13.069898 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:13.069902 | orchestrator | 2026-01-02 00:59:13.069906 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-02 00:59:13.069910 | orchestrator | Friday 02 January 2026 00:56:27 +0000 (0:00:01.113) 0:00:09.842 ******** 2026-01-02 00:59:13.069914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-02 00:59:13.069921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-02 00:59:13.069925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-02 00:59:13.069940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-02 00:59:13.069945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-02 00:59:13.069952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-02 00:59:13.069956 | orchestrator | 2026-01-02 00:59:13.069964 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-02 00:59:13.069968 | orchestrator | Friday 02 January 2026 00:56:30 +0000 (0:00:02.630) 0:00:12.472 ******** 2026-01-02 00:59:13.069971 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:13.069975 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:59:13.069979 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:59:13.069983 | orchestrator | 2026-01-02 00:59:13.069987 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-02 00:59:13.069991 | orchestrator | Friday 02 January 2026 00:56:33 +0000 (0:00:03.077) 0:00:15.549 ******** 2026-01-02 00:59:13.069995 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:13.069999 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:59:13.070002 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:59:13.070006 | orchestrator | 2026-01-02 00:59:13.070010 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-01-02 00:59:13.070049 | orchestrator | Friday 02 January 2026 00:56:35 +0000 (0:00:02.183) 0:00:17.733 ******** 2026-01-02 00:59:13.070053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-02 00:59:13.070062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-02 00:59:13.070066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-02 00:59:13.070073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-02 00:59:13.070082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-02 00:59:13.070091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-02 00:59:13.070096 | orchestrator | 2026-01-02 00:59:13.070100 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-02 00:59:13.070105 | orchestrator | Friday 02 January 2026 00:56:37 +0000 (0:00:02.573) 0:00:20.307 ******** 2026-01-02 00:59:13.070109 | 
orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:13.070112 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:13.070116 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:13.070120 | orchestrator | 2026-01-02 00:59:13.070124 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-02 00:59:13.070128 | orchestrator | Friday 02 January 2026 00:56:38 +0000 (0:00:00.346) 0:00:20.653 ******** 2026-01-02 00:59:13.070132 | orchestrator | 2026-01-02 00:59:13.070135 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-02 00:59:13.070139 | orchestrator | Friday 02 January 2026 00:56:38 +0000 (0:00:00.070) 0:00:20.724 ******** 2026-01-02 00:59:13.070143 | orchestrator | 2026-01-02 00:59:13.070147 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-02 00:59:13.070151 | orchestrator | Friday 02 January 2026 00:56:38 +0000 (0:00:00.070) 0:00:20.794 ******** 2026-01-02 00:59:13.070155 | orchestrator | 2026-01-02 00:59:13.070162 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-02 00:59:13.070166 | orchestrator | Friday 02 January 2026 00:56:38 +0000 (0:00:00.082) 0:00:20.877 ******** 2026-01-02 00:59:13.070170 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:13.070173 | orchestrator | 2026-01-02 00:59:13.070177 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-02 00:59:13.070181 | orchestrator | Friday 02 January 2026 00:56:38 +0000 (0:00:00.241) 0:00:21.119 ******** 2026-01-02 00:59:13.070185 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:13.070189 | orchestrator | 2026-01-02 00:59:13.070195 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-02 00:59:13.070199 | orchestrator | Friday 02 January 2026 
00:56:39 +0000 (0:00:00.682) 0:00:21.801 ******** 2026-01-02 00:59:13.070203 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:13.070207 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:59:13.070211 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:59:13.070215 | orchestrator | 2026-01-02 00:59:13.070219 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-02 00:59:13.070222 | orchestrator | Friday 02 January 2026 00:57:43 +0000 (0:01:04.338) 0:01:26.139 ******** 2026-01-02 00:59:13.070226 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:13.070230 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:59:13.070234 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:59:13.070238 | orchestrator | 2026-01-02 00:59:13.070241 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-02 00:59:13.070245 | orchestrator | Friday 02 January 2026 00:58:59 +0000 (0:01:15.779) 0:02:41.919 ******** 2026-01-02 00:59:13.070249 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:59:13.070253 | orchestrator | 2026-01-02 00:59:13.070257 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-02 00:59:13.070261 | orchestrator | Friday 02 January 2026 00:59:00 +0000 (0:00:00.744) 0:02:42.664 ******** 2026-01-02 00:59:13.070264 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:59:13.070268 | orchestrator | 2026-01-02 00:59:13.070272 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-02 00:59:13.070276 | orchestrator | Friday 02 January 2026 00:59:02 +0000 (0:00:02.671) 0:02:45.335 ******** 2026-01-02 00:59:13.070280 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:59:13.070284 | orchestrator | 2026-01-02 00:59:13.070287 | orchestrator | TASK 
[opensearch : Create new log retention policy] **************************** 2026-01-02 00:59:13.070291 | orchestrator | Friday 02 January 2026 00:59:05 +0000 (0:00:02.426) 0:02:47.762 ******** 2026-01-02 00:59:13.070295 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:13.070299 | orchestrator | 2026-01-02 00:59:13.070303 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-02 00:59:13.070306 | orchestrator | Friday 02 January 2026 00:59:08 +0000 (0:00:02.875) 0:02:50.637 ******** 2026-01-02 00:59:13.070310 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:13.070314 | orchestrator | 2026-01-02 00:59:13.070318 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:59:13.070323 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-02 00:59:13.070329 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-02 00:59:13.070333 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-02 00:59:13.070337 | orchestrator | 2026-01-02 00:59:13.070341 | orchestrator | 2026-01-02 00:59:13.070345 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:59:13.070352 | orchestrator | Friday 02 January 2026 00:59:10 +0000 (0:00:02.500) 0:02:53.138 ******** 2026-01-02 00:59:13.070364 | orchestrator | =============================================================================== 2026-01-02 00:59:13.070368 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 75.78s 2026-01-02 00:59:13.070390 | orchestrator | opensearch : Restart opensearch container ------------------------------ 64.34s 2026-01-02 00:59:13.070398 | orchestrator | opensearch : Copying over opensearch service config file 
---------------- 3.08s 2026-01-02 00:59:13.070404 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.88s 2026-01-02 00:59:13.070410 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.81s 2026-01-02 00:59:13.070417 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.67s 2026-01-02 00:59:13.070425 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.63s 2026-01-02 00:59:13.070432 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.57s 2026-01-02 00:59:13.070438 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.50s 2026-01-02 00:59:13.070443 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.43s 2026-01-02 00:59:13.070449 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.18s 2026-01-02 00:59:13.070454 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.91s 2026-01-02 00:59:13.070460 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.21s 2026-01-02 00:59:13.070466 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.11s 2026-01-02 00:59:13.070472 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.74s 2026-01-02 00:59:13.070478 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.68s 2026-01-02 00:59:13.070482 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.68s 2026-01-02 00:59:13.070486 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.58s 2026-01-02 00:59:13.070490 | orchestrator | opensearch : include_tasks 
---------------------------------------------- 0.50s 2026-01-02 00:59:13.070493 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2026-01-02 00:59:13.070500 | orchestrator | 2026-01-02 00:59:13 | INFO  | Task ea5cb038-122d-49fa-86ac-aa194d5b539f is in state SUCCESS 2026-01-02 00:59:13.070504 | orchestrator | 2026-01-02 00:59:13 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 00:59:13.070508 | orchestrator | 2026-01-02 00:59:13 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED 2026-01-02 00:59:13.070512 | orchestrator | 2026-01-02 00:59:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:59:16.113176 | orchestrator | 2026-01-02 00:59:16 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 00:59:16.114696 | orchestrator | 2026-01-02 00:59:16 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED 2026-01-02 00:59:16.114752 | orchestrator | 2026-01-02 00:59:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:59:19.156127 | orchestrator | 2026-01-02 00:59:19 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 00:59:19.158788 | orchestrator | 2026-01-02 00:59:19 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED 2026-01-02 00:59:19.158829 | orchestrator | 2026-01-02 00:59:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:59:22.201185 | orchestrator | 2026-01-02 00:59:22 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 00:59:22.202738 | orchestrator | 2026-01-02 00:59:22 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED 2026-01-02 00:59:22.202794 | orchestrator | 2026-01-02 00:59:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:59:25.249140 | orchestrator | 2026-01-02 00:59:25 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 
00:59:25.250055 | orchestrator | 2026-01-02 00:59:25 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state STARTED 2026-01-02 00:59:25.250083 | orchestrator | 2026-01-02 00:59:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:59:28.296870 | orchestrator | 2026-01-02 00:59:28 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 00:59:28.298656 | orchestrator | 2026-01-02 00:59:28 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 00:59:28.300504 | orchestrator | 2026-01-02 00:59:28 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 00:59:28.303846 | orchestrator | 2026-01-02 00:59:28 | INFO  | Task 19d9afdc-d1ba-4ba9-8213-7f8a11efefbb is in state SUCCESS 2026-01-02 00:59:28.305482 | orchestrator | 2026-01-02 00:59:28.305522 | orchestrator | 2026-01-02 00:59:28.305529 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-01-02 00:59:28.305536 | orchestrator | 2026-01-02 00:59:28.305541 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-02 00:59:28.305547 | orchestrator | Friday 02 January 2026 00:56:17 +0000 (0:00:00.093) 0:00:00.093 ******** 2026-01-02 00:59:28.305553 | orchestrator | ok: [localhost] => { 2026-01-02 00:59:28.305561 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-01-02 00:59:28.305566 | orchestrator | } 2026-01-02 00:59:28.305572 | orchestrator | 2026-01-02 00:59:28.305578 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-01-02 00:59:28.305583 | orchestrator | Friday 02 January 2026 00:56:17 +0000 (0:00:00.052) 0:00:00.146 ******** 2026-01-02 00:59:28.305588 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-01-02 00:59:28.305595 | orchestrator | ...ignoring 2026-01-02 00:59:28.305601 | orchestrator | 2026-01-02 00:59:28.305606 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-01-02 00:59:28.305611 | orchestrator | Friday 02 January 2026 00:56:20 +0000 (0:00:02.884) 0:00:03.030 ******** 2026-01-02 00:59:28.305616 | orchestrator | skipping: [localhost] 2026-01-02 00:59:28.305622 | orchestrator | 2026-01-02 00:59:28.305627 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-01-02 00:59:28.305632 | orchestrator | Friday 02 January 2026 00:56:20 +0000 (0:00:00.052) 0:00:03.082 ******** 2026-01-02 00:59:28.305637 | orchestrator | ok: [localhost] 2026-01-02 00:59:28.305642 | orchestrator | 2026-01-02 00:59:28.305647 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 00:59:28.305652 | orchestrator | 2026-01-02 00:59:28.305657 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 00:59:28.305662 | orchestrator | Friday 02 January 2026 00:56:20 +0000 (0:00:00.156) 0:00:03.239 ******** 2026-01-02 00:59:28.305668 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:59:28.305673 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:59:28.305678 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:59:28.305683 | orchestrator | 2026-01-02 00:59:28.305690 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 00:59:28.305743 | orchestrator | Friday 02 January 2026 00:56:21 +0000 (0:00:00.349) 0:00:03.588 ******** 2026-01-02 00:59:28.305756 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-02 00:59:28.305764 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-01-02 00:59:28.305772 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-02 00:59:28.305780 | orchestrator | 2026-01-02 00:59:28.305799 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-02 00:59:28.305828 | orchestrator | 2026-01-02 00:59:28.305837 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-02 00:59:28.305846 | orchestrator | Friday 02 January 2026 00:56:21 +0000 (0:00:00.617) 0:00:04.206 ******** 2026-01-02 00:59:28.305854 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-02 00:59:28.305863 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-02 00:59:28.305872 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-02 00:59:28.305880 | orchestrator | 2026-01-02 00:59:28.305888 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-02 00:59:28.305897 | orchestrator | Friday 02 January 2026 00:56:22 +0000 (0:00:00.445) 0:00:04.651 ******** 2026-01-02 00:59:28.305906 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:59:28.305916 | orchestrator | 2026-01-02 00:59:28.305925 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-02 00:59:28.305934 | orchestrator | Friday 02 January 2026 00:56:22 +0000 (0:00:00.601) 0:00:05.253 ******** 2026-01-02 00:59:28.305960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-02 00:59:28.305974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-02 00:59:28.305989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-02 00:59:28.305995 | orchestrator | 2026-01-02 00:59:28.306004 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-02 00:59:28.306010 | orchestrator | Friday 02 January 2026 00:56:25 +0000 (0:00:03.055) 0:00:08.309 ******** 2026-01-02 00:59:28.306048 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:28.306060 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:28.306067 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:28.306073 | orchestrator | 2026-01-02 00:59:28.306079 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-02 00:59:28.306085 | orchestrator | Friday 02 January 2026 00:56:26 +0000 (0:00:00.692) 0:00:09.001 ******** 2026-01-02 00:59:28.306091 | orchestrator | skipping: [testbed-node-1] 2026-01-02 
00:59:28.306097 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:28.306103 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:28.306109 | orchestrator | 2026-01-02 00:59:28.306115 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-02 00:59:28.306121 | orchestrator | Friday 02 January 2026 00:56:28 +0000 (0:00:01.799) 0:00:10.801 ******** 2026-01-02 00:59:28.306131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-02 00:59:28.306148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-02 00:59:28.306159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-02 
00:59:28.306169 | orchestrator | 2026-01-02 00:59:28.306176 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-02 00:59:28.306182 | orchestrator | Friday 02 January 2026 00:56:32 +0000 (0:00:04.201) 0:00:15.002 ******** 2026-01-02 00:59:28.306188 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:28.306194 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:28.306200 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:28.306206 | orchestrator | 2026-01-02 00:59:28.306212 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-02 00:59:28.306218 | orchestrator | Friday 02 January 2026 00:56:33 +0000 (0:00:01.106) 0:00:16.109 ******** 2026-01-02 00:59:28.306224 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:28.306230 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:59:28.306236 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:59:28.306241 | orchestrator | 2026-01-02 00:59:28.306248 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-02 00:59:28.306258 | orchestrator | Friday 02 January 2026 00:56:39 +0000 (0:00:05.276) 0:00:21.385 ******** 2026-01-02 00:59:28.306266 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:59:28.306280 | orchestrator | 2026-01-02 00:59:28.306289 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-02 00:59:28.306297 | orchestrator | Friday 02 January 2026 00:56:39 +0000 (0:00:00.572) 0:00:21.958 ******** 2026-01-02 00:59:28.306313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:59:28.306330 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:28.306344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:59:28.306383 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:28.306400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:59:28.306416 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:28.306426 | orchestrator | 2026-01-02 00:59:28.306435 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-02 00:59:28.306443 | orchestrator | Friday 02 January 2026 00:56:44 +0000 (0:00:04.665) 0:00:26.624 ******** 2026-01-02 00:59:28.306456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:59:28.306467 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:28.306480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:59:28.306495 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:28.306508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:59:28.306516 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:28.306524 | orchestrator | 2026-01-02 00:59:28.306532 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-02 00:59:28.306540 | orchestrator | Friday 02 January 2026 00:56:47 +0000 (0:00:03.532) 0:00:30.157 ******** 2026-01-02 00:59:28.306548 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:59:28.306566 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:28.306581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:59:28.306590 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:28.306673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:59:28.306684 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:28.306693 | orchestrator | 2026-01-02 00:59:28.306701 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-01-02 00:59:28.306716 | orchestrator | Friday 02 January 2026 00:56:50 +0000 
(0:00:03.159) 0:00:33.316 ******** 2026-01-02 00:59:28.306733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-02 00:59:28.306743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-02 00:59:28.306787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-02 00:59:28.306806 | orchestrator | 2026-01-02 00:59:28.306815 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-01-02 00:59:28.306824 | orchestrator | Friday 02 January 2026 00:56:54 +0000 (0:00:03.966) 0:00:37.283 ******** 2026-01-02 00:59:28.306831 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:28.306837 | orchestrator | 
changed: [testbed-node-1] 2026-01-02 00:59:28.306842 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:59:28.306848 | orchestrator | 2026-01-02 00:59:28.306853 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-01-02 00:59:28.306858 | orchestrator | Friday 02 January 2026 00:56:55 +0000 (0:00:00.813) 0:00:38.097 ******** 2026-01-02 00:59:28.306864 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:59:28.306869 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:59:28.306875 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:59:28.306880 | orchestrator | 2026-01-02 00:59:28.306885 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-01-02 00:59:28.306891 | orchestrator | Friday 02 January 2026 00:56:56 +0000 (0:00:00.873) 0:00:38.971 ******** 2026-01-02 00:59:28.306898 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:59:28.306906 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:59:28.306914 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:59:28.306922 | orchestrator | 2026-01-02 00:59:28.306934 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-01-02 00:59:28.306943 | orchestrator | Friday 02 January 2026 00:56:56 +0000 (0:00:00.369) 0:00:39.340 ******** 2026-01-02 00:59:28.306951 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-01-02 00:59:28.306960 | orchestrator | ...ignoring 2026-01-02 00:59:28.306968 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-01-02 00:59:28.306975 | orchestrator | ...ignoring 2026-01-02 00:59:28.306983 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-01-02 00:59:28.306992 | orchestrator | ...ignoring 2026-01-02 00:59:28.307009 | orchestrator | 2026-01-02 00:59:28.307019 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-01-02 00:59:28.307028 | orchestrator | Friday 02 January 2026 00:57:07 +0000 (0:00:10.946) 0:00:50.287 ******** 2026-01-02 00:59:28.307043 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:59:28.307052 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:59:28.307060 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:59:28.307069 | orchestrator | 2026-01-02 00:59:28.307078 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-01-02 00:59:28.307086 | orchestrator | Friday 02 January 2026 00:57:08 +0000 (0:00:00.418) 0:00:50.706 ******** 2026-01-02 00:59:28.307096 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:28.307104 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:28.307112 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:28.307119 | orchestrator | 2026-01-02 00:59:28.307125 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-01-02 00:59:28.307130 | orchestrator | Friday 02 January 2026 00:57:09 +0000 (0:00:00.693) 0:00:51.399 ******** 2026-01-02 00:59:28.307135 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:28.307140 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:28.307145 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:28.307150 | orchestrator | 2026-01-02 00:59:28.307155 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-01-02 00:59:28.307161 | orchestrator | Friday 02 January 2026 00:57:09 +0000 (0:00:00.470) 0:00:51.869 ******** 2026-01-02 00:59:28.307166 | orchestrator | skipping: 
[testbed-node-0] 2026-01-02 00:59:28.307171 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:28.307177 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:28.307182 | orchestrator | 2026-01-02 00:59:28.307187 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-01-02 00:59:28.307192 | orchestrator | Friday 02 January 2026 00:57:09 +0000 (0:00:00.441) 0:00:52.310 ******** 2026-01-02 00:59:28.307197 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:59:28.307202 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:59:28.307207 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:59:28.307213 | orchestrator | 2026-01-02 00:59:28.307218 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-01-02 00:59:28.307224 | orchestrator | Friday 02 January 2026 00:57:10 +0000 (0:00:00.424) 0:00:52.734 ******** 2026-01-02 00:59:28.307235 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:28.307240 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:28.307245 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:28.307250 | orchestrator | 2026-01-02 00:59:28.307256 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-02 00:59:28.307261 | orchestrator | Friday 02 January 2026 00:57:11 +0000 (0:00:00.723) 0:00:53.458 ******** 2026-01-02 00:59:28.307266 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:28.307271 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:28.307276 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-01-02 00:59:28.307281 | orchestrator | 2026-01-02 00:59:28.307286 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-01-02 00:59:28.307291 | orchestrator | Friday 02 January 2026 00:57:11 +0000 (0:00:00.405) 0:00:53.863 ******** 2026-01-02 
00:59:28.307296 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:28.307301 | orchestrator | 2026-01-02 00:59:28.307307 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-01-02 00:59:28.307312 | orchestrator | Friday 02 January 2026 00:57:22 +0000 (0:00:10.726) 0:01:04.590 ******** 2026-01-02 00:59:28.307317 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:59:28.307322 | orchestrator | 2026-01-02 00:59:28.307327 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-02 00:59:28.307332 | orchestrator | Friday 02 January 2026 00:57:22 +0000 (0:00:00.113) 0:01:04.703 ******** 2026-01-02 00:59:28.307337 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:28.307342 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:28.307347 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:28.307370 | orchestrator | 2026-01-02 00:59:28.307379 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-01-02 00:59:28.307389 | orchestrator | Friday 02 January 2026 00:57:23 +0000 (0:00:01.093) 0:01:05.797 ******** 2026-01-02 00:59:28.307394 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:28.307399 | orchestrator | 2026-01-02 00:59:28.307405 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-01-02 00:59:28.307410 | orchestrator | Friday 02 January 2026 00:57:31 +0000 (0:00:08.053) 0:01:13.850 ******** 2026-01-02 00:59:28.307415 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:59:28.307420 | orchestrator | 2026-01-02 00:59:28.307425 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-01-02 00:59:28.307430 | orchestrator | Friday 02 January 2026 00:57:33 +0000 (0:00:01.666) 0:01:15.517 ******** 2026-01-02 00:59:28.307435 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:59:28.307440 | 
orchestrator | 2026-01-02 00:59:28.307445 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-01-02 00:59:28.307451 | orchestrator | Friday 02 January 2026 00:57:35 +0000 (0:00:02.727) 0:01:18.245 ******** 2026-01-02 00:59:28.307460 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:28.307465 | orchestrator | 2026-01-02 00:59:28.307471 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-01-02 00:59:28.307476 | orchestrator | Friday 02 January 2026 00:57:36 +0000 (0:00:00.137) 0:01:18.382 ******** 2026-01-02 00:59:28.307481 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:28.307486 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:28.307491 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:28.307496 | orchestrator | 2026-01-02 00:59:28.307501 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-01-02 00:59:28.307506 | orchestrator | Friday 02 January 2026 00:57:36 +0000 (0:00:00.321) 0:01:18.703 ******** 2026-01-02 00:59:28.307511 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:28.307516 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-02 00:59:28.307521 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:59:28.307526 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:59:28.307532 | orchestrator | 2026-01-02 00:59:28.307537 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-02 00:59:28.307542 | orchestrator | skipping: no hosts matched 2026-01-02 00:59:28.307547 | orchestrator | 2026-01-02 00:59:28.307552 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-02 00:59:28.307557 | orchestrator | 2026-01-02 00:59:28.307562 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-01-02 00:59:28.307567 | orchestrator | Friday 02 January 2026 00:57:36 +0000 (0:00:00.608) 0:01:19.312 ******** 2026-01-02 00:59:28.307572 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:59:28.307578 | orchestrator | 2026-01-02 00:59:28.307583 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-02 00:59:28.307588 | orchestrator | Friday 02 January 2026 00:57:55 +0000 (0:00:18.891) 0:01:38.204 ******** 2026-01-02 00:59:28.307593 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:59:28.307598 | orchestrator | 2026-01-02 00:59:28.307603 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-02 00:59:28.307608 | orchestrator | Friday 02 January 2026 00:58:12 +0000 (0:00:16.621) 0:01:54.825 ******** 2026-01-02 00:59:28.307614 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:59:28.307619 | orchestrator | 2026-01-02 00:59:28.307624 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-02 00:59:28.307629 | orchestrator | 2026-01-02 00:59:28.307634 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-02 00:59:28.307640 | orchestrator | Friday 02 January 2026 00:58:14 +0000 (0:00:02.461) 0:01:57.286 ******** 2026-01-02 00:59:28.307645 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:59:28.307650 | orchestrator | 2026-01-02 00:59:28.307655 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-02 00:59:28.307664 | orchestrator | Friday 02 January 2026 00:58:33 +0000 (0:00:18.288) 0:02:15.575 ******** 2026-01-02 00:59:28.307669 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:59:28.307674 | orchestrator | 2026-01-02 00:59:28.307679 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-02 00:59:28.307684 
| orchestrator | Friday 02 January 2026 00:58:48 +0000 (0:00:15.637) 0:02:31.212 ******** 2026-01-02 00:59:28.307689 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:59:28.307695 | orchestrator | 2026-01-02 00:59:28.307700 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-02 00:59:28.307705 | orchestrator | 2026-01-02 00:59:28.307714 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-02 00:59:28.307723 | orchestrator | Friday 02 January 2026 00:58:51 +0000 (0:00:02.965) 0:02:34.178 ******** 2026-01-02 00:59:28.307731 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:28.307739 | orchestrator | 2026-01-02 00:59:28.307747 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-02 00:59:28.307755 | orchestrator | Friday 02 January 2026 00:59:04 +0000 (0:00:12.634) 0:02:46.812 ******** 2026-01-02 00:59:28.307763 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:59:28.307772 | orchestrator | 2026-01-02 00:59:28.307781 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-02 00:59:28.307790 | orchestrator | Friday 02 January 2026 00:59:09 +0000 (0:00:04.679) 0:02:51.492 ******** 2026-01-02 00:59:28.307799 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:59:28.307807 | orchestrator | 2026-01-02 00:59:28.307816 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-02 00:59:28.307823 | orchestrator | 2026-01-02 00:59:28.307828 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-02 00:59:28.307833 | orchestrator | Friday 02 January 2026 00:59:11 +0000 (0:00:02.850) 0:02:54.342 ******** 2026-01-02 00:59:28.307839 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:59:28.307844 | orchestrator | 
2026-01-02 00:59:28.307849 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-02 00:59:28.307854 | orchestrator | Friday 02 January 2026 00:59:12 +0000 (0:00:00.578) 0:02:54.921 ******** 2026-01-02 00:59:28.307859 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:28.307865 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:28.307870 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:28.307875 | orchestrator | 2026-01-02 00:59:28.307880 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-02 00:59:28.307886 | orchestrator | Friday 02 January 2026 00:59:15 +0000 (0:00:02.469) 0:02:57.390 ******** 2026-01-02 00:59:28.307891 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:28.307896 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:28.307901 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:28.307907 | orchestrator | 2026-01-02 00:59:28.307912 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-02 00:59:28.307917 | orchestrator | Friday 02 January 2026 00:59:17 +0000 (0:00:02.250) 0:02:59.641 ******** 2026-01-02 00:59:28.307922 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:28.307927 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:28.307932 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:28.307938 | orchestrator | 2026-01-02 00:59:28.307943 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-01-02 00:59:28.307966 | orchestrator | Friday 02 January 2026 00:59:19 +0000 (0:00:02.254) 0:03:01.895 ******** 2026-01-02 00:59:28.307971 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:28.307977 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:28.307982 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:28.307987 | orchestrator | 
2026-01-02 00:59:28.307992 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-02 00:59:28.307997 | orchestrator | Friday 02 January 2026 00:59:21 +0000 (0:00:02.296) 0:03:04.192 ******** 2026-01-02 00:59:28.308007 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:59:28.308012 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:59:28.308018 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:59:28.308023 | orchestrator | 2026-01-02 00:59:28.308028 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-02 00:59:28.308033 | orchestrator | Friday 02 January 2026 00:59:25 +0000 (0:00:03.236) 0:03:07.429 ******** 2026-01-02 00:59:28.308038 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:28.308044 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:28.308049 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:28.308054 | orchestrator | 2026-01-02 00:59:28.308059 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:59:28.308064 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-02 00:59:28.308070 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-01-02 00:59:28.308077 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-02 00:59:28.308082 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-02 00:59:28.308087 | orchestrator | 2026-01-02 00:59:28.308093 | orchestrator | 2026-01-02 00:59:28.308098 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:59:28.308103 | orchestrator | Friday 02 January 2026 00:59:25 +0000 (0:00:00.253) 0:03:07.683 ******** 2026-01-02 00:59:28.308108 | 
orchestrator | =============================================================================== 2026-01-02 00:59:28.308114 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.18s 2026-01-02 00:59:28.308119 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 32.26s 2026-01-02 00:59:28.308124 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.63s 2026-01-02 00:59:28.308129 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.95s 2026-01-02 00:59:28.308134 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.73s 2026-01-02 00:59:28.308140 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.05s 2026-01-02 00:59:28.308149 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.43s 2026-01-02 00:59:28.308154 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.28s 2026-01-02 00:59:28.308159 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.68s 2026-01-02 00:59:28.308165 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.67s 2026-01-02 00:59:28.308170 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.20s 2026-01-02 00:59:28.308178 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.97s 2026-01-02 00:59:28.308186 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.53s 2026-01-02 00:59:28.308195 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.24s 2026-01-02 00:59:28.308203 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.16s 2026-01-02 00:59:28.308211 | 
orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.06s 2026-01-02 00:59:28.308219 | orchestrator | Check MariaDB service --------------------------------------------------- 2.88s 2026-01-02 00:59:28.308227 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.85s 2026-01-02 00:59:28.308236 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.73s 2026-01-02 00:59:28.308244 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.47s 2026-01-02 00:59:28.308258 | orchestrator | 2026-01-02 00:59:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:59:31.352468 | orchestrator | 2026-01-02 00:59:31 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 00:59:31.355244 | orchestrator | 2026-01-02 00:59:31 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 00:59:31.358590 | orchestrator | 2026-01-02 00:59:31 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 00:59:31.359067 | orchestrator | 2026-01-02 00:59:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:59:34.407690 | orchestrator | 2026-01-02 00:59:34 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 00:59:34.410789 | orchestrator | 2026-01-02 00:59:34 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 00:59:34.414407 | orchestrator | 2026-01-02 00:59:34 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 00:59:34.414458 | orchestrator | 2026-01-02 00:59:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:59:37.460519 | orchestrator | 2026-01-02 00:59:37 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 00:59:37.462231 | orchestrator | 2026-01-02 00:59:37 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is 
in state STARTED 2026-01-02 00:59:37.465187 | orchestrator | 2026-01-02 00:59:37 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 00:59:37.465548 | orchestrator | 2026-01-02 00:59:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:59:40.507299 | orchestrator | 2026-01-02 00:59:40 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 00:59:40.508723 | orchestrator | 2026-01-02 00:59:40 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 00:59:40.511392 | orchestrator | 2026-01-02 00:59:40 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 00:59:40.511658 | orchestrator | 2026-01-02 00:59:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:59:43.551613 | orchestrator | 2026-01-02 00:59:43 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 00:59:43.552694 | orchestrator | 2026-01-02 00:59:43 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 00:59:43.553512 | orchestrator | 2026-01-02 00:59:43 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 00:59:43.553615 | orchestrator | 2026-01-02 00:59:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:59:46.598151 | orchestrator | 2026-01-02 00:59:46 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 00:59:46.600369 | orchestrator | 2026-01-02 00:59:46 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 00:59:46.602782 | orchestrator | 2026-01-02 00:59:46 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 00:59:46.603948 | orchestrator | 2026-01-02 00:59:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:59:49.637559 | orchestrator | 2026-01-02 00:59:49 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 00:59:49.638889 | 
orchestrator | 2026-01-02 00:59:49 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 00:59:49.641084 | orchestrator | 2026-01-02 00:59:49 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 00:59:49.641141 | orchestrator | 2026-01-02 00:59:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:59:52.684386 | orchestrator | 2026-01-02 00:59:52 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 00:59:52.684497 | orchestrator | 2026-01-02 00:59:52 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 00:59:52.687266 | orchestrator | 2026-01-02 00:59:52 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 00:59:52.687307 | orchestrator | 2026-01-02 00:59:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:59:55.717044 | orchestrator | 2026-01-02 00:59:55 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 00:59:55.717475 | orchestrator | 2026-01-02 00:59:55 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 00:59:55.718531 | orchestrator | 2026-01-02 00:59:55 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 00:59:55.718562 | orchestrator | 2026-01-02 00:59:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:59:58.748726 | orchestrator | 2026-01-02 00:59:58 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 00:59:58.748801 | orchestrator | 2026-01-02 00:59:58 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 00:59:58.750049 | orchestrator | 2026-01-02 00:59:58 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 00:59:58.750587 | orchestrator | 2026-01-02 00:59:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:01.804735 | orchestrator | 2026-01-02 01:00:01 | INFO  | Task 
f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 01:00:01.805973 | orchestrator | 2026-01-02 01:00:01 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 01:00:01.809118 | orchestrator | 2026-01-02 01:00:01 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:00:01.809733 | orchestrator | 2026-01-02 01:00:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:04.866863 | orchestrator | 2026-01-02 01:00:04 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 01:00:04.868462 | orchestrator | 2026-01-02 01:00:04 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 01:00:04.870219 | orchestrator | 2026-01-02 01:00:04 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:00:04.870566 | orchestrator | 2026-01-02 01:00:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:07.931912 | orchestrator | 2026-01-02 01:00:07 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 01:00:07.932004 | orchestrator | 2026-01-02 01:00:07 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 01:00:07.933179 | orchestrator | 2026-01-02 01:00:07 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:00:07.933209 | orchestrator | 2026-01-02 01:00:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:10.976365 | orchestrator | 2026-01-02 01:00:10 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 01:00:10.978350 | orchestrator | 2026-01-02 01:00:10 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 01:00:10.980055 | orchestrator | 2026-01-02 01:00:10 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:00:10.980085 | orchestrator | 2026-01-02 01:00:10 | INFO  | Wait 1 second(s) until the next 
check 2026-01-02 01:00:14.036783 | orchestrator | 2026-01-02 01:00:14 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 01:00:14.037990 | orchestrator | 2026-01-02 01:00:14 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 01:00:14.040104 | orchestrator | 2026-01-02 01:00:14 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:00:14.040159 | orchestrator | 2026-01-02 01:00:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:17.084950 | orchestrator | 2026-01-02 01:00:17 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 01:00:17.085329 | orchestrator | 2026-01-02 01:00:17 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 01:00:17.086116 | orchestrator | 2026-01-02 01:00:17 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:00:17.086193 | orchestrator | 2026-01-02 01:00:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:20.129967 | orchestrator | 2026-01-02 01:00:20 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 01:00:20.132171 | orchestrator | 2026-01-02 01:00:20 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 01:00:20.134175 | orchestrator | 2026-01-02 01:00:20 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:00:20.134211 | orchestrator | 2026-01-02 01:00:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:23.175666 | orchestrator | 2026-01-02 01:00:23 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 01:00:23.178257 | orchestrator | 2026-01-02 01:00:23 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 01:00:23.181246 | orchestrator | 2026-01-02 01:00:23 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 
01:00:23.181621 | orchestrator | 2026-01-02 01:00:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:26.244580 | orchestrator | 2026-01-02 01:00:26 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 01:00:26.245782 | orchestrator | 2026-01-02 01:00:26 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 01:00:26.247406 | orchestrator | 2026-01-02 01:00:26 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:00:26.247436 | orchestrator | 2026-01-02 01:00:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:29.293168 | orchestrator | 2026-01-02 01:00:29 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 01:00:29.294250 | orchestrator | 2026-01-02 01:00:29 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 01:00:29.295523 | orchestrator | 2026-01-02 01:00:29 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:00:29.295562 | orchestrator | 2026-01-02 01:00:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:32.345611 | orchestrator | 2026-01-02 01:00:32 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state STARTED 2026-01-02 01:00:32.347501 | orchestrator | 2026-01-02 01:00:32 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 01:00:32.349808 | orchestrator | 2026-01-02 01:00:32 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:00:32.349862 | orchestrator | 2026-01-02 01:00:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:35.397191 | orchestrator | 2026-01-02 01:00:35 | INFO  | Task f505b5f3-ad51-494a-9ba9-8616959b188e is in state SUCCESS 2026-01-02 01:00:35.398466 | orchestrator | 2026-01-02 01:00:35.398534 | orchestrator | 2026-01-02 01:00:35.398552 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-01-02 01:00:35.398565 | orchestrator | 2026-01-02 01:00:35.398576 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 01:00:35.398589 | orchestrator | Friday 02 January 2026 00:59:30 +0000 (0:00:00.277) 0:00:00.277 ******** 2026-01-02 01:00:35.398600 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:35.398614 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:00:35.398626 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:35.398661 | orchestrator | 2026-01-02 01:00:35.398672 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 01:00:35.398684 | orchestrator | Friday 02 January 2026 00:59:30 +0000 (0:00:00.326) 0:00:00.604 ******** 2026-01-02 01:00:35.398695 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-02 01:00:35.398707 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-02 01:00:35.398718 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-02 01:00:35.398729 | orchestrator | 2026-01-02 01:00:35.398740 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-01-02 01:00:35.398751 | orchestrator | 2026-01-02 01:00:35.398762 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-02 01:00:35.398773 | orchestrator | Friday 02 January 2026 00:59:30 +0000 (0:00:00.460) 0:00:01.065 ******** 2026-01-02 01:00:35.398784 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 01:00:35.398796 | orchestrator | 2026-01-02 01:00:35.398807 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-02 01:00:35.398818 | orchestrator | Friday 02 January 2026 00:59:31 +0000 (0:00:00.626) 0:00:01.691 ******** 2026-01-02 
01:00:35.398835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-02 01:00:35.398853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-02 01:00:35.398899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-02 01:00:35.399025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 01:00:35.399042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 01:00:35.399054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 01:00:35.399066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 01:00:35.399078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 01:00:35.399103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 01:00:35.399115 | orchestrator | 2026-01-02 01:00:35.399126 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-01-02 01:00:35.399138 | orchestrator | Friday 02 January 2026 00:59:33 +0000 (0:00:01.951) 0:00:03.642 ******** 2026-01-02 01:00:35.399150 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:35.399356 | orchestrator | 2026-01-02 01:00:35.399434 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-02 01:00:35.399447 | orchestrator | Friday 02 January 2026 00:59:33 +0000 (0:00:00.151) 0:00:03.794 ******** 2026-01-02 01:00:35.399459 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:35.399470 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:35.399481 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:35.399492 | orchestrator | 2026-01-02 01:00:35.399503 | orchestrator | TASK [keystone : Check if 
Keystone domain-specific config is supplied] ********* 2026-01-02 01:00:35.399515 | orchestrator | Friday 02 January 2026 00:59:34 +0000 (0:00:00.466) 0:00:04.260 ******** 2026-01-02 01:00:35.399525 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-02 01:00:35.399536 | orchestrator | 2026-01-02 01:00:35.399547 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-02 01:00:35.399558 | orchestrator | Friday 02 January 2026 00:59:34 +0000 (0:00:00.824) 0:00:05.084 ******** 2026-01-02 01:00:35.399570 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 01:00:35.399581 | orchestrator | 2026-01-02 01:00:35.399592 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-01-02 01:00:35.399603 | orchestrator | Friday 02 January 2026 00:59:35 +0000 (0:00:00.566) 0:00:05.651 ******** 2026-01-02 01:00:35.399615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}}) 2026-01-02 01:00:35.399629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-02 01:00:35.399658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-02 01:00:35.399711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 01:00:35.399726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 01:00:35.399738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 01:00:35.399750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 01:00:35.399762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 01:00:35.399786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 
2026-01-02 01:00:35.399798 | orchestrator | 2026-01-02 01:00:35.399843 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-02 01:00:35.399856 | orchestrator | Friday 02 January 2026 00:59:39 +0000 (0:00:03.577) 0:00:09.229 ******** 2026-01-02 01:00:35.399876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-02 01:00:35.399887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 
01:00:35.399897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 01:00:35.399908 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:35.399921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-02 01:00:35.399990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 01:00:35.400011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 01:00:35.400022 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:35.400041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-02 01:00:35.400052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 01:00:35.400063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 01:00:35.400080 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:35.400090 | orchestrator | 2026-01-02 01:00:35.400100 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-02 01:00:35.400110 | orchestrator | Friday 02 January 2026 00:59:39 +0000 (0:00:00.837) 0:00:10.066 ******** 2026-01-02 01:00:35.400126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-02 01:00:35.400136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 01:00:35.400153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 01:00:35.400163 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:35.400174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-02 01:00:35.400191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 
01:00:35.400201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 01:00:35.400211 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:35.400227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-02 01:00:35.400245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 01:00:35.400255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 01:00:35.400265 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:35.400307 | orchestrator | 2026-01-02 01:00:35.400320 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-02 01:00:35.400330 | orchestrator | Friday 02 January 2026 00:59:40 +0000 (0:00:00.764) 0:00:10.831 ******** 2026-01-02 01:00:35.400340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-02 01:00:35.400359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-02 01:00:35.400985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-02 01:00:35.401013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 01:00:35.401024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 01:00:35.401050 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 01:00:35.401060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 01:00:35.401071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 01:00:35.401085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 01:00:35.401096 | orchestrator | 2026-01-02 01:00:35.401106 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-02 01:00:35.401123 | orchestrator | Friday 02 January 2026 00:59:43 +0000 (0:00:03.330) 0:00:14.162 ******** 2026-01-02 01:00:35.401152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-02 01:00:35.401182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 01:00:35.401197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-02 01:00:35.401214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 01:00:35.401243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-02 01:00:35.401261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 01:00:35.401302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 01:00:35.401328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 01:00:35.401346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 01:00:35.401364 | orchestrator | 2026-01-02 01:00:35.401381 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-02 01:00:35.401397 
| orchestrator | Friday 02 January 2026 00:59:49 +0000 (0:00:05.866) 0:00:20.028 ******** 2026-01-02 01:00:35.401411 | orchestrator | changed: [testbed-node-1] 2026-01-02 01:00:35.401421 | orchestrator | changed: [testbed-node-0] 2026-01-02 01:00:35.401431 | orchestrator | changed: [testbed-node-2] 2026-01-02 01:00:35.401441 | orchestrator | 2026-01-02 01:00:35.401451 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-01-02 01:00:35.401460 | orchestrator | Friday 02 January 2026 00:59:51 +0000 (0:00:01.815) 0:00:21.843 ******** 2026-01-02 01:00:35.401470 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:35.401479 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:35.401489 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:35.401499 | orchestrator | 2026-01-02 01:00:35.401508 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-01-02 01:00:35.401518 | orchestrator | Friday 02 January 2026 00:59:52 +0000 (0:00:00.602) 0:00:22.446 ******** 2026-01-02 01:00:35.401528 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:35.401537 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:35.401552 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:35.401562 | orchestrator | 2026-01-02 01:00:35.401575 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-01-02 01:00:35.401590 | orchestrator | Friday 02 January 2026 00:59:52 +0000 (0:00:00.330) 0:00:22.776 ******** 2026-01-02 01:00:35.401607 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:35.401623 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:35.401638 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:35.401653 | orchestrator | 2026-01-02 01:00:35.401668 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-01-02 01:00:35.401683 | 
orchestrator | Friday 02 January 2026 00:59:53 +0000 (0:00:00.498) 0:00:23.275 ******** 2026-01-02 01:00:35.401712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-02 01:00:35.401741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 01:00:35.401768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 01:00:35.401785 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:35.401802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-02 01:00:35.401825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 01:00:35.401851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 01:00:35.401878 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:35.401895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-01-02 01:00:35.401911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 01:00:35.401928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 01:00:35.401942 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:35.401956 | orchestrator | 2026-01-02 01:00:35.401970 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-02 01:00:35.401985 | orchestrator | Friday 02 January 2026 00:59:53 +0000 (0:00:00.563) 0:00:23.838 ******** 2026-01-02 01:00:35.401999 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:35.402069 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:35.402092 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:35.402107 | orchestrator | 2026-01-02 01:00:35.402124 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] 
****************************** 2026-01-02 01:00:35.402139 | orchestrator | Friday 02 January 2026 00:59:53 +0000 (0:00:00.299) 0:00:24.137 ******** 2026-01-02 01:00:35.402154 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-02 01:00:35.402171 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-02 01:00:35.402187 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-02 01:00:35.402204 | orchestrator | 2026-01-02 01:00:35.402326 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-01-02 01:00:35.402376 | orchestrator | Friday 02 January 2026 00:59:55 +0000 (0:00:01.756) 0:00:25.894 ******** 2026-01-02 01:00:35.402399 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-02 01:00:35.402419 | orchestrator | 2026-01-02 01:00:35.402436 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-01-02 01:00:35.402450 | orchestrator | Friday 02 January 2026 00:59:56 +0000 (0:00:00.956) 0:00:26.851 ******** 2026-01-02 01:00:35.402467 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:35.402483 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:35.402500 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:35.402516 | orchestrator | 2026-01-02 01:00:35.402532 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-01-02 01:00:35.402548 | orchestrator | Friday 02 January 2026 00:59:57 +0000 (0:00:00.934) 0:00:27.786 ******** 2026-01-02 01:00:35.402563 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-02 01:00:35.402573 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-02 01:00:35.402583 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-02 01:00:35.402593 | orchestrator | 2026-01-02 
01:00:35.402603 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-01-02 01:00:35.402625 | orchestrator | Friday 02 January 2026 00:59:58 +0000 (0:00:01.133) 0:00:28.920 ******** 2026-01-02 01:00:35.402636 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:35.402646 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:00:35.402656 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:35.402665 | orchestrator | 2026-01-02 01:00:35.402675 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-01-02 01:00:35.402685 | orchestrator | Friday 02 January 2026 00:59:59 +0000 (0:00:00.352) 0:00:29.272 ******** 2026-01-02 01:00:35.402695 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-02 01:00:35.402705 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-02 01:00:35.402715 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-02 01:00:35.402724 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-02 01:00:35.402734 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-02 01:00:35.402744 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-02 01:00:35.402754 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-02 01:00:35.402763 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-02 01:00:35.402773 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-02 01:00:35.402783 | orchestrator | changed: 
[testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-02 01:00:35.402792 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-02 01:00:35.402802 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-02 01:00:35.402812 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-02 01:00:35.402822 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-02 01:00:35.402832 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-02 01:00:35.402842 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-02 01:00:35.402851 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-02 01:00:35.402861 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-02 01:00:35.402880 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-02 01:00:35.402894 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-02 01:00:35.402910 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-02 01:00:35.402925 | orchestrator | 2026-01-02 01:00:35.402940 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-01-02 01:00:35.402956 | orchestrator | Friday 02 January 2026 01:00:08 +0000 (0:00:09.719) 0:00:38.992 ******** 2026-01-02 01:00:35.402971 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-02 01:00:35.402986 | orchestrator | changed: 
[testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-02 01:00:35.403004 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-02 01:00:35.403021 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-02 01:00:35.403036 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-02 01:00:35.403051 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-02 01:00:35.403061 | orchestrator | 2026-01-02 01:00:35.403072 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-01-02 01:00:35.403081 | orchestrator | Friday 02 January 2026 01:00:11 +0000 (0:00:03.093) 0:00:42.086 ******** 2026-01-02 01:00:35.403109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-02 01:00:35.403122 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-02 01:00:35.403134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-02 01:00:35.403153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 01:00:35.403169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 01:00:35.403179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 01:00:35.403196 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 01:00:35.403207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 01:00:35.403217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 01:00:35.403239 | orchestrator | 2026-01-02 01:00:35.403250 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2026-01-02 01:00:35.403260 | orchestrator | Friday 02 January 2026 01:00:14 +0000 (0:00:02.505) 0:00:44.591 ******** 2026-01-02 01:00:35.403297 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:35.403312 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:35.403322 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:35.403332 | orchestrator | 2026-01-02 01:00:35.403341 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-01-02 01:00:35.403351 | orchestrator | Friday 02 January 2026 01:00:14 +0000 (0:00:00.305) 0:00:44.896 ******** 2026-01-02 01:00:35.403361 | orchestrator | changed: [testbed-node-0] 2026-01-02 01:00:35.403370 | orchestrator | 2026-01-02 01:00:35.403380 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-01-02 01:00:35.403389 | orchestrator | Friday 02 January 2026 01:00:17 +0000 (0:00:02.302) 0:00:47.199 ******** 2026-01-02 01:00:35.403399 | orchestrator | changed: [testbed-node-0] 2026-01-02 01:00:35.403409 | orchestrator | 2026-01-02 01:00:35.403418 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-01-02 01:00:35.403428 | orchestrator | Friday 02 January 2026 01:00:19 +0000 (0:00:02.465) 0:00:49.665 ******** 2026-01-02 01:00:35.403438 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:35.403448 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:00:35.403457 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:35.403467 | orchestrator | 2026-01-02 01:00:35.403477 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-01-02 01:00:35.403486 | orchestrator | Friday 02 January 2026 01:00:20 +0000 (0:00:01.067) 0:00:50.733 ******** 2026-01-02 01:00:35.403496 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:35.403506 | orchestrator | ok: 
[testbed-node-1] 2026-01-02 01:00:35.403516 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:35.403525 | orchestrator | 2026-01-02 01:00:35.403535 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-01-02 01:00:35.403545 | orchestrator | Friday 02 January 2026 01:00:20 +0000 (0:00:00.324) 0:00:51.057 ******** 2026-01-02 01:00:35.403555 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:35.403565 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:35.403574 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:35.403584 | orchestrator | 2026-01-02 01:00:35.403599 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-01-02 01:00:35.403609 | orchestrator | Friday 02 January 2026 01:00:21 +0000 (0:00:00.350) 0:00:51.408 ******** 2026-01-02 01:00:35.403809 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "Container exited with non-zero return code 1", "rc": 1, "stderr": "+ sudo -E kolla_set_configs\nINFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json\nINFO:__main__:Validating config file\nINFO:__main__:Kolla config strategy set to: COPY_ALWAYS\nINFO:__main__:Copying service configuration files\nINFO:__main__:Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh\nINFO:__main__:Setting permission for /usr/bin/keystone-startup.sh\nINFO:__main__:Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf\nINFO:__main__:Setting permission for /etc/keystone/keystone.conf\nINFO:__main__:Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf\nINFO:__main__:Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf\nINFO:__main__:Writing out command to execute\nINFO:__main__:Setting permission for /var/log/kolla\nINFO:__main__:Setting permission for /etc/keystone/fernet-keys\n++ 
cat /run_command\n+ CMD=/usr/bin/keystone-startup.sh\n+ ARGS=\n+ sudo kolla_copy_cacerts\nrehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL\n+ sudo kolla_install_projects\n+ [[ ! -n '' ]]\n+ . kolla_extend_start\n++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone\n++ [[ ! -d /var/log/kolla/keystone ]]\n++ mkdir -p /var/log/kolla/keystone\n+++ stat -c %U:%G /var/log/kolla/keystone\n++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]\n++ chown keystone:kolla /var/log/kolla/keystone\n++ '[' '!' -f /var/log/kolla/keystone/keystone.log ']'\n++ touch /var/log/kolla/keystone/keystone.log\n+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log\n++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]\n++ chown keystone:keystone /var/log/kolla/keystone/keystone.log\n+++ stat -c %a /var/log/kolla/keystone\n++ [[ 2755 != \\7\\5\\5 ]]\n++ chmod 755 /var/log/kolla/keystone\n++ EXTRA_KEYSTONE_MANAGE_ARGS=\n++ [[ -n '' ]]\n++ [[ -n '' ]]\n++ [[ -n 0 ]]\n++ sudo -H -u keystone keystone-manage db_sync\n2026-01-02 01:00:32.133 1079 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:342\n2026-01-02 01:00:32.138 1079 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")\n(Background on this error at: https://sqlalche.me/e/20/e3q8)\n2026-01-02 01:00:32.138 1079 ERROR keystone Traceback (most recent call last):\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__\n2026-01-02 01:00:32.138 1079 ERROR keystone self._dbapi_connection = 
engine.raw_connection()\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection\n2026-01-02 01:00:32.138 1079 ERROR keystone return self.pool.connect()\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect\n2026-01-02 01:00:32.138 1079 ERROR keystone return _ConnectionFairy._checkout(self)\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout\n2026-01-02 01:00:32.138 1079 ERROR keystone fairy = _ConnectionRecord.checkout(pool)\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout\n2026-01-02 01:00:32.138 1079 ERROR keystone rec = pool._do_get()\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get\n2026-01-02 01:00:32.138 1079 ERROR keystone with util.safe_reraise():\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__\n2026-01-02 01:00:32.138 1079 ERROR keystone raise exc_value.with_traceback(exc_tb)\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get\n2026-01-02 01:00:32.138 1079 ERROR keystone return self._create_connection()\n2026-01-02 
01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection\n2026-01-02 01:00:32.138 1079 ERROR keystone return _ConnectionRecord(self)\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__\n2026-01-02 01:00:32.138 1079 ERROR keystone self.__connect()\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect\n2026-01-02 01:00:32.138 1079 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run\n2026-01-02 01:00:32.138 1079 ERROR keystone self(*args, **kw)\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__\n2026-01-02 01:00:32.138 1079 ERROR keystone fn(*args, **kw)\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go\n2026-01-02 01:00:32.138 1079 ERROR keystone return once_fn(*arg, **kw)\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect\n2026-01-02 01:00:32.138 1079 ERROR keystone dialect.initialize(c)\n2026-01-02 01:00:32.138 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize\n2026-01-02 01:00:32.138 1079 ERROR keystone default.DefaultDialect.initialize(self, connection)\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize\n2026-01-02 01:00:32.138 1079 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level\n2026-01-02 01:00:32.138 1079 ERROR keystone return self.get_isolation_level(dbapi_conn)\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level\n2026-01-02 01:00:32.138 1079 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute\n2026-01-02 01:00:32.138 1079 ERROR keystone result = self._query(query)\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query\n2026-01-02 01:00:32.138 1079 ERROR keystone conn.query(q)\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query\n2026-01-02 01:00:32.138 1079 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n2026-01-02 01:00:32.138 1079 ERROR keystone 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result\n2026-01-02 01:00:32.138 1079 ERROR keystone result.read()\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read\n2026-01-02 01:00:32.138 1079 ERROR keystone first_packet = self.connection._read_packet()\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet\n2026-01-02 01:00:32.138 1079 ERROR keystone packet.raise_for_error()\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error\n2026-01-02 01:00:32.138 1079 ERROR keystone err.raise_mysql_exception(self._data)\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception\n2026-01-02 01:00:32.138 1079 ERROR keystone raise errorclass(errno, errval)\n2026-01-02 01:00:32.138 1079 ERROR keystone pymysql.err.OperationalError: (1193, \"Unknown system variable 'transaction_isolation'\")\n2026-01-02 01:00:32.138 1079 ERROR keystone \n2026-01-02 01:00:32.138 1079 ERROR keystone The above exception was the direct cause of the following exception:\n2026-01-02 01:00:32.138 1079 ERROR keystone \n2026-01-02 01:00:32.138 1079 ERROR keystone Traceback (most recent call last):\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/bin/keystone-manage\", line 7, in \n2026-01-02 01:00:32.138 1079 ERROR keystone sys.exit(main())\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/manage.py\", line 36, in main\n2026-01-02 01:00:32.138 1079 ERROR keystone cli.main(argv=sys.argv, developer_config_file=developer_config)\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 1733, in main\n2026-01-02 01:00:32.138 1079 ERROR keystone CONF.command.cmd_class.main()\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 493, in main\n2026-01-02 01:00:32.138 1079 ERROR keystone upgrades.offline_sync_database_to_version(CONF.command.version)\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 328, in offline_sync_database_to_version\n2026-01-02 01:00:32.138 1079 ERROR keystone _db_sync(engine=engine)\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 217, in _db_sync\n2026-01-02 01:00:32.138 1079 ERROR keystone with sql.session_for_write() as session:\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n2026-01-02 01:00:32.138 1079 ERROR keystone return next(self.gen)\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 1042, in _transaction_scope\n2026-01-02 01:00:32.138 1079 ERROR keystone with current._produce_block(\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n2026-01-02 01:00:32.138 1079 ERROR keystone return next(self.gen)\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 641, in _session\n2026-01-02 01:00:32.138 1079 ERROR keystone self.session = self.factory._create_session(\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 404, in _create_session\n2026-01-02 01:00:32.138 1079 ERROR keystone self._start()\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 493, in _start\n2026-01-02 01:00:32.138 1079 ERROR keystone self._setup_for_connection(\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 519, in _setup_for_connection\n2026-01-02 01:00:32.138 1079 ERROR keystone engine = engines.create_engine(\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/debtcollector/renames.py\", line 41, in decorator\n2026-01-02 01:00:32.138 1079 ERROR keystone return wrapped(*args, **kwargs)\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 218, in create_engine\n2026-01-02 01:00:32.138 1079 ERROR keystone test_conn = _test_connection(engine, max_retries, retry_interval)\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 411, in _test_connection\n2026-01-02 01:00:32.138 1079 ERROR keystone return engine.connect()\n2026-01-02 01:00:32.138 1079 ERROR 
keystone ^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3278, in connect\n2026-01-02 01:00:32.138 1079 ERROR keystone return self._connection_cls(self)\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 148, in __init__\n2026-01-02 01:00:32.138 1079 ERROR keystone Connection._handle_dbapi_exception_noconnection(\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 2439, in _handle_dbapi_exception_noconnection\n2026-01-02 01:00:32.138 1079 ERROR keystone raise newraise.with_traceback(exc_info[2]) from e\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__\n2026-01-02 01:00:32.138 1079 ERROR keystone self._dbapi_connection = engine.raw_connection()\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection\n2026-01-02 01:00:32.138 1079 ERROR keystone return self.pool.connect()\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect\n2026-01-02 01:00:32.138 1079 ERROR keystone return _ConnectionFairy._checkout(self)\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout\n2026-01-02 01:00:32.138 1079 ERROR keystone fairy = 
_ConnectionRecord.checkout(pool)\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout\n2026-01-02 01:00:32.138 1079 ERROR keystone rec = pool._do_get()\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get\n2026-01-02 01:00:32.138 1079 ERROR keystone with util.safe_reraise():\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__\n2026-01-02 01:00:32.138 1079 ERROR keystone raise exc_value.with_traceback(exc_tb)\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get\n2026-01-02 01:00:32.138 1079 ERROR keystone return self._create_connection()\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection\n2026-01-02 01:00:32.138 1079 ERROR keystone return _ConnectionRecord(self)\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__\n2026-01-02 01:00:32.138 1079 ERROR keystone self.__connect()\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect\n2026-01-02 01:00:32.138 1079 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)\n2026-01-02 01:00:32.138 1079 ERROR keystone 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run\n2026-01-02 01:00:32.138 1079 ERROR keystone self(*args, **kw)\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__\n2026-01-02 01:00:32.138 1079 ERROR keystone fn(*args, **kw)\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go\n2026-01-02 01:00:32.138 1079 ERROR keystone return once_fn(*arg, **kw)\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect\n2026-01-02 01:00:32.138 1079 ERROR keystone dialect.initialize(c)\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize\n2026-01-02 01:00:32.138 1079 ERROR keystone default.DefaultDialect.initialize(self, connection)\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize\n2026-01-02 01:00:32.138 1079 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level\n2026-01-02 01:00:32.138 1079 ERROR keystone return self.get_isolation_level(dbapi_conn)\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR 
keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level\n2026-01-02 01:00:32.138 1079 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute\n2026-01-02 01:00:32.138 1079 ERROR keystone result = self._query(query)\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query\n2026-01-02 01:00:32.138 1079 ERROR keystone conn.query(q)\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query\n2026-01-02 01:00:32.138 1079 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result\n2026-01-02 01:00:32.138 1079 ERROR keystone result.read()\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read\n2026-01-02 01:00:32.138 1079 ERROR keystone first_packet = self.connection._read_packet()\n2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet\n2026-01-02 01:00:32.138 1079 ERROR keystone packet.raise_for_error()\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error\n2026-01-02 01:00:32.138 
1079 ERROR keystone err.raise_mysql_exception(self._data)\n2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception\n2026-01-02 01:00:32.138 1079 ERROR keystone raise errorclass(errno, errval)\n2026-01-02 01:00:32.138 1079 ERROR keystone sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")\n2026-01-02 01:00:32.138 1079 ERROR keystone (Background on this error at: https://sqlalche.me/e/20/e3q8)\n2026-01-02 01:00:32.138 1079 ERROR keystone \n", "stderr_lines": ["+ sudo -E kolla_set_configs", "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", "INFO:__main__:Validating config file", "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", "INFO:__main__:Copying service configuration files", "INFO:__main__:Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh", "INFO:__main__:Setting permission for /usr/bin/keystone-startup.sh", "INFO:__main__:Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf", "INFO:__main__:Setting permission for /etc/keystone/keystone.conf", "INFO:__main__:Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf", "INFO:__main__:Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf", "INFO:__main__:Writing out command to execute", "INFO:__main__:Setting permission for /var/log/kolla", "INFO:__main__:Setting permission for /etc/keystone/fernet-keys", "++ cat /run_command", "+ CMD=/usr/bin/keystone-startup.sh", "+ ARGS=", "+ sudo kolla_copy_cacerts", "rehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL", "+ sudo kolla_install_projects", "+ [[ ! -n '' ]]", "+ . kolla_extend_start", "++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone", "++ [[ ! 
-d /var/log/kolla/keystone ]]", "++ mkdir -p /var/log/kolla/keystone", "+++ stat -c %U:%G /var/log/kolla/keystone", "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]", "++ chown keystone:kolla /var/log/kolla/keystone", "++ '[' '!' -f /var/log/kolla/keystone/keystone.log ']'", "++ touch /var/log/kolla/keystone/keystone.log", "+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log", "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]", "++ chown keystone:keystone /var/log/kolla/keystone/keystone.log", "+++ stat -c %a /var/log/kolla/keystone", "++ [[ 2755 != \\7\\5\\5 ]]", "++ chmod 755 /var/log/kolla/keystone", "++ EXTRA_KEYSTONE_MANAGE_ARGS=", "++ [[ -n '' ]]", "++ [[ -n '' ]]", "++ [[ -n 0 ]]", "++ sudo -H -u keystone keystone-manage db_sync", "2026-01-02 01:00:32.133 1079 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:342", "2026-01-02 01:00:32.138 1079 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")", "(Background on this error at: https://sqlalche.me/e/20/e3q8)", "2026-01-02 01:00:32.138 1079 ERROR keystone Traceback (most recent call last):", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__", "2026-01-02 01:00:32.138 1079 ERROR keystone self._dbapi_connection = engine.raw_connection()", "2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection", "2026-01-02 01:00:32.138 1079 
ERROR keystone return self.pool.connect()", "2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect", "2026-01-02 01:00:32.138 1079 ERROR keystone return _ConnectionFairy._checkout(self)", "2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout", "2026-01-02 01:00:32.138 1079 ERROR keystone fairy = _ConnectionRecord.checkout(pool)", "2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout", "2026-01-02 01:00:32.138 1079 ERROR keystone rec = pool._do_get()", "2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get", "2026-01-02 01:00:32.138 1079 ERROR keystone with util.safe_reraise():", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__", "2026-01-02 01:00:32.138 1079 ERROR keystone raise exc_value.with_traceback(exc_tb)", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get", "2026-01-02 01:00:32.138 1079 ERROR keystone return self._create_connection()", "2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection", "2026-01-02 01:00:32.138 1079 
ERROR keystone return _ConnectionRecord(self)", "2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__", "2026-01-02 01:00:32.138 1079 ERROR keystone self.__connect()", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect", "2026-01-02 01:00:32.138 1079 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)", "2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run", "2026-01-02 01:00:32.138 1079 ERROR keystone self(*args, **kw)", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__", "2026-01-02 01:00:32.138 1079 ERROR keystone fn(*args, **kw)", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go", "2026-01-02 01:00:32.138 1079 ERROR keystone return once_fn(*arg, **kw)", "2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect", "2026-01-02 01:00:32.138 1079 ERROR keystone dialect.initialize(c)", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize", "2026-01-02 01:00:32.138 1079 ERROR keystone default.DefaultDialect.initialize(self, connection)", "2026-01-02 01:00:32.138 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize", "2026-01-02 01:00:32.138 1079 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(", "2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level", "2026-01-02 01:00:32.138 1079 ERROR keystone return self.get_isolation_level(dbapi_conn)", "2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level", "2026-01-02 01:00:32.138 1079 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute", "2026-01-02 01:00:32.138 1079 ERROR keystone result = self._query(query)", "2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query", "2026-01-02 01:00:32.138 1079 ERROR keystone conn.query(q)", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query", "2026-01-02 01:00:32.138 1079 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)", "2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result", "2026-01-02 01:00:32.138 1079 ERROR keystone result.read()", 
"2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read", "2026-01-02 01:00:32.138 1079 ERROR keystone first_packet = self.connection._read_packet()", "2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet", "2026-01-02 01:00:32.138 1079 ERROR keystone packet.raise_for_error()", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error", "2026-01-02 01:00:32.138 1079 ERROR keystone err.raise_mysql_exception(self._data)", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception", "2026-01-02 01:00:32.138 1079 ERROR keystone raise errorclass(errno, errval)", "2026-01-02 01:00:32.138 1079 ERROR keystone pymysql.err.OperationalError: (1193, \"Unknown system variable 'transaction_isolation'\")", "2026-01-02 01:00:32.138 1079 ERROR keystone ", "2026-01-02 01:00:32.138 1079 ERROR keystone The above exception was the direct cause of the following exception:", "2026-01-02 01:00:32.138 1079 ERROR keystone ", "2026-01-02 01:00:32.138 1079 ERROR keystone Traceback (most recent call last):", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/bin/keystone-manage\", line 7, in ", "2026-01-02 01:00:32.138 1079 ERROR keystone sys.exit(main())", "2026-01-02 01:00:32.138 1079 ERROR keystone ^^^^^^", "2026-01-02 01:00:32.138 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/manage.py\", line 36, in main", "2026-01-02 01:00:32.138 1079 ERROR keystone cli.main(argv=sys.argv, developer_config_file=developer_config)", "2026-01-02 01:00:32.138 1079 ERROR keystone File 
"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py", line 1733, in main
    CONF.command.cmd_class.main()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py", line 493, in main
    upgrades.offline_sync_database_to_version(CONF.command.version)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py", line 328, in offline_sync_database_to_version
    _db_sync(engine=engine)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py", line 217, in _db_sync
    with sql.session_for_write() as session:
  File "/usr/lib/python3.12/contextlib.py", line 137, in __enter__
    return next(self.gen)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 1042, in _transaction_scope
    with current._produce_block(
  File "/usr/lib/python3.12/contextlib.py", line 137, in __enter__
    return next(self.gen)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 641, in _session
    self.session = self.factory._create_session(
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 404, in _create_session
    self._start()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 493, in _start
    self._setup_for_connection(
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 519, in _setup_for_connection
    engine = engines.create_engine(
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/debtcollector/renames.py", line 41, in decorator
    return wrapped(*args, **kwargs)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py", line 218, in create_engine
    test_conn = _test_connection(engine, max_retries, retry_interval)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py", line 411, in _test_connection
    return engine.connect()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 3278, in connect
    return self._connection_cls(self)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 148, in __init__
    Connection._handle_dbapi_exception_noconnection(
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2439, in _handle_dbapi_exception_noconnection
    raise newraise.with_traceback(exc_info[2]) from e
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 146, in __init__
    self._dbapi_connection = engine.raw_connection()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 3302, in raw_connection
    return self.pool.connect()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py", line 449, in connect
    return _ConnectionFairy._checkout(self)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py", line 1263, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py", line 712, in checkout
    rec = pool._do_get()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py", line 179, in _do_get
    with util.safe_reraise():
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
    raise exc_value.with_traceback(exc_tb)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py", line 177, in _do_get
    return self._create_connection()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py", line 390, in _create_connection
    return _ConnectionRecord(self)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py", line 674, in __init__
    self.__connect()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py", line 914, in __connect
    )._exec_w_sync_on_first_run(self.dbapi_connection, self)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py", line 483, in _exec_w_sync_on_first_run
    self(*args, **kw)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py", line 497, in __call__
    fn(*args, **kw)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py", line 1912, in go
    return once_fn(*arg, **kw)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py", line 749, in first_connect
    dialect.initialize(c)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py", line 2835, in initialize
    default.DefaultDialect.initialize(self, connection)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 532, in initialize
    self.default_isolation_level = self.get_default_isolation_level(
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 583, in get_default_isolation_level
    return self.get_isolation_level(dbapi_conn)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py", line 2540, in get_isolation_level
    cursor.execute("SELECT @@transaction_isolation")
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py", line 153, in execute
    result = self._query(query)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py", line 322, in _query
    conn.query(q)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py", line 563, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py", line 825, in _read_query_result
    result.read()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py", line 1199, in read
    first_packet = self.connection._read_packet()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py", line 775, in _read_packet
    packet.raise_for_error()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py", line 219, in raise_for_error
    err.raise_mysql_exception(self._data)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py", line 150, in raise_mysql_exception
    raise errorclass(errno, errval)
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, "Unknown system variable 'transaction_isolation'")
(Background on this error at: https://sqlalche.me/e/20/e3q8)

stdout:
Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.

2026-01-02 01:00:35.403914 | orchestrator |
2026-01-02 01:00:35.403925 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 01:00:35.403936 | orchestrator | testbed-node-0 : ok=21  changed=11  unreachable=0  failed=1  skipped=12  rescued=0  ignored=0
2026-01-02 01:00:35.403952 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0  failed=0  skipped=11  rescued=0  ignored=0
2026-01-02 01:00:35.403964 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0  failed=0  skipped=11  rescued=0  ignored=0
2026-01-02 01:00:35.403980 | orchestrator |
2026-01-02 01:00:35.403990 | orchestrator |
2026-01-02 01:00:35.404000 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 01:00:35.404010 | orchestrator | Friday 02 January 2026 01:00:33 +0000 (0:00:11.899) 0:01:03.307 ********
2026-01-02 01:00:35.404020 | orchestrator | ===============================================================================
2026-01-02 01:00:35.404030
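Annotation: the failure above is MySQL error 1193 raised while SQLAlchemy's MySQL dialect probed `SELECT @@transaction_isolation` during dialect initialization. That variable exists on MySQL 5.7.20 and later (where it replaced the deprecated `tx_isolation`), but older servers, and MariaDB before it added the new name, only understand `tx_isolation`; a server whose version is misdetected (for example behind a database proxy) is then probed with the wrong variable and db_sync aborts before any migration runs. A minimal, hypothetical sketch of that version-dependent choice (`isolation_variable` is illustrative, not SQLAlchemy's actual code):

```python
def isolation_variable(server_version: tuple, is_mariadb: bool) -> str:
    """Illustrative only: pick the system variable a MySQL-compatible
    server understands for reading the transaction isolation level.

    MySQL introduced transaction_isolation in 5.7.20 and dropped the old
    tx_isolation name in 8.0; MariaDB kept tx_isolation much longer.
    Probing the name the server does not know yields error 1193, exactly
    as in the keystone traceback above.
    """
    if is_mariadb:
        # Assumption for this sketch: treat MariaDB < 11.1 as old-name only.
        return "transaction_isolation" if server_version >= (11, 1) else "tx_isolation"
    return "transaction_isolation" if server_version >= (5, 7, 20) else "tx_isolation"


# A MariaDB 10.x server that reports (or is misdetected as) MySQL 8.x would
# be queried with transaction_isolation and fail with error 1193.
```

The practical fix is on the deployment side (matching database server version to what the client stack expects), not in this helper; the sketch only shows why the probe diverges.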
| orchestrator | keystone : Running Keystone bootstrap container ------------------------ 11.90s 2026-01-02 01:00:35.404040 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.72s 2026-01-02 01:00:35.404050 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.87s 2026-01-02 01:00:35.404059 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.58s 2026-01-02 01:00:35.404069 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.33s 2026-01-02 01:00:35.404079 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.09s 2026-01-02 01:00:35.404089 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.51s 2026-01-02 01:00:35.404099 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.47s 2026-01-02 01:00:35.404109 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.30s 2026-01-02 01:00:35.404119 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.95s 2026-01-02 01:00:35.404129 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.82s 2026-01-02 01:00:35.404139 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.76s 2026-01-02 01:00:35.404150 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 1.13s 2026-01-02 01:00:35.404160 | orchestrator | keystone : Checking for any running keystone_fernet containers ---------- 1.07s 2026-01-02 01:00:35.404169 | orchestrator | keystone : Checking whether keystone-paste.ini file exists -------------- 0.96s 2026-01-02 01:00:35.404226 | orchestrator | keystone : Copying over keystone-paste.ini ------------------------------ 0.93s 2026-01-02 01:00:35.404237 | orchestrator 
| service-cert-copy : keystone | Copying over backend internal TLS certificate --- 0.84s 2026-01-02 01:00:35.404247 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 0.82s 2026-01-02 01:00:35.404257 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS key ---- 0.76s 2026-01-02 01:00:35.404267 | orchestrator | keystone : include_tasks ------------------------------------------------ 0.63s 2026-01-02 01:00:35.404302 | orchestrator | 2026-01-02 01:00:35 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:00:35.404409 | orchestrator | 2026-01-02 01:00:35 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:00:35.405679 | orchestrator | 2026-01-02 01:00:35 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 01:00:35.406973 | orchestrator | 2026-01-02 01:00:35 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:00:35.408087 | orchestrator | 2026-01-02 01:00:35 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:00:35.408109 | orchestrator | 2026-01-02 01:00:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:38.452453 | orchestrator | 2026-01-02 01:00:38 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:00:38.454215 | orchestrator | 2026-01-02 01:00:38 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:00:38.457115 | orchestrator | 2026-01-02 01:00:38 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 01:00:38.459507 | orchestrator | 2026-01-02 01:00:38 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:00:38.461103 | orchestrator | 2026-01-02 01:00:38 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:00:38.461149 | orchestrator | 2026-01-02 01:00:38 | 
INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:41.522810 | orchestrator | 2026-01-02 01:00:41 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:00:41.525115 | orchestrator | 2026-01-02 01:00:41 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:00:41.527876 | orchestrator | 2026-01-02 01:00:41 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 01:00:41.532119 | orchestrator | 2026-01-02 01:00:41 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:00:41.535044 | orchestrator | 2026-01-02 01:00:41 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:00:41.535143 | orchestrator | 2026-01-02 01:00:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:44.585396 | orchestrator | 2026-01-02 01:00:44 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:00:44.586660 | orchestrator | 2026-01-02 01:00:44 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:00:44.588525 | orchestrator | 2026-01-02 01:00:44 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 01:00:44.592075 | orchestrator | 2026-01-02 01:00:44 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:00:44.594571 | orchestrator | 2026-01-02 01:00:44 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:00:44.595198 | orchestrator | 2026-01-02 01:00:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:47.643893 | orchestrator | 2026-01-02 01:00:47 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:00:47.647075 | orchestrator | 2026-01-02 01:00:47 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:00:47.650748 | orchestrator | 2026-01-02 01:00:47 | INFO  | Task 
3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 01:00:47.652802 | orchestrator | 2026-01-02 01:00:47 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:00:47.655606 | orchestrator | 2026-01-02 01:00:47 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:00:47.655682 | orchestrator | 2026-01-02 01:00:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:50.703464 | orchestrator | 2026-01-02 01:00:50 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:00:50.706346 | orchestrator | 2026-01-02 01:00:50 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:00:50.709757 | orchestrator | 2026-01-02 01:00:50 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 01:00:50.712101 | orchestrator | 2026-01-02 01:00:50 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:00:50.713696 | orchestrator | 2026-01-02 01:00:50 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:00:50.713853 | orchestrator | 2026-01-02 01:00:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:53.762353 | orchestrator | 2026-01-02 01:00:53 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:00:53.765095 | orchestrator | 2026-01-02 01:00:53 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:00:53.768723 | orchestrator | 2026-01-02 01:00:53 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state STARTED 2026-01-02 01:00:53.773458 | orchestrator | 2026-01-02 01:00:53 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:00:53.779633 | orchestrator | 2026-01-02 01:00:53 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:00:53.779684 | orchestrator | 2026-01-02 01:00:53 | INFO  | Wait 1 
second(s) until the next check 2026-01-02 01:00:56.824957 | orchestrator | 2026-01-02 01:00:56 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:00:56.825998 | orchestrator | 2026-01-02 01:00:56 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:00:56.828097 | orchestrator | 2026-01-02 01:00:56 | INFO  | Task 3a94c23a-0e64-4972-9eda-b204deaa8ff6 is in state SUCCESS 2026-01-02 01:00:56.829999 | orchestrator | 2026-01-02 01:00:56.830120 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-02 01:00:56.830143 | orchestrator | 2.16.14 2026-01-02 01:00:56.830173 | orchestrator | 2026-01-02 01:00:56.830215 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-01-02 01:00:56.830451 | orchestrator | 2026-01-02 01:00:56.830492 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-02 01:00:56.830506 | orchestrator | Friday 02 January 2026 00:58:42 +0000 (0:00:00.615) 0:00:00.615 ******** 2026-01-02 01:00:56.830518 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 01:00:56.830532 | orchestrator | 2026-01-02 01:00:56.830546 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-02 01:00:56.830560 | orchestrator | Friday 02 January 2026 00:58:43 +0000 (0:00:00.681) 0:00:01.296 ******** 2026-01-02 01:00:56.830574 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:00:56.830590 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:00:56.830608 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:00:56.830628 | orchestrator | 2026-01-02 01:00:56.830647 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-02 01:00:56.830667 | orchestrator | Friday 02 January 2026 00:58:43 +0000 (0:00:00.646) 
0:00:01.942 ******** 2026-01-02 01:00:56.830685 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:00:56.830704 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:00:56.830723 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:00:56.830742 | orchestrator | 2026-01-02 01:00:56.830761 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-02 01:00:56.830782 | orchestrator | Friday 02 January 2026 00:58:44 +0000 (0:00:00.304) 0:00:02.247 ******** 2026-01-02 01:00:56.830873 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:00:56.830888 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:00:56.830902 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:00:56.830915 | orchestrator | 2026-01-02 01:00:56.830955 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-02 01:00:56.830967 | orchestrator | Friday 02 January 2026 00:58:45 +0000 (0:00:00.888) 0:00:03.135 ******** 2026-01-02 01:00:56.830977 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:00:56.830988 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:00:56.830999 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:00:56.831010 | orchestrator | 2026-01-02 01:00:56.831059 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-02 01:00:56.831070 | orchestrator | Friday 02 January 2026 00:58:45 +0000 (0:00:00.355) 0:00:03.491 ******** 2026-01-02 01:00:56.831081 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:00:56.831092 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:00:56.831103 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:00:56.831114 | orchestrator | 2026-01-02 01:00:56.831126 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-02 01:00:56.831137 | orchestrator | Friday 02 January 2026 00:58:45 +0000 (0:00:00.352) 0:00:03.843 ******** 2026-01-02 01:00:56.831173 | orchestrator | ok: 
[testbed-node-3] 2026-01-02 01:00:56.831185 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:00:56.831196 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:00:56.831235 | orchestrator | 2026-01-02 01:00:56.831298 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-02 01:00:56.831311 | orchestrator | Friday 02 January 2026 00:58:46 +0000 (0:00:00.328) 0:00:04.172 ******** 2026-01-02 01:00:56.831322 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.831334 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.831346 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.831356 | orchestrator | 2026-01-02 01:00:56.831368 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-02 01:00:56.831379 | orchestrator | Friday 02 January 2026 00:58:46 +0000 (0:00:00.525) 0:00:04.698 ******** 2026-01-02 01:00:56.831390 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:00:56.831401 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:00:56.831412 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:00:56.831423 | orchestrator | 2026-01-02 01:00:56.831434 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-02 01:00:56.831445 | orchestrator | Friday 02 January 2026 00:58:47 +0000 (0:00:00.356) 0:00:05.054 ******** 2026-01-02 01:00:56.831456 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-02 01:00:56.831467 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-02 01:00:56.831478 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-02 01:00:56.831489 | orchestrator | 2026-01-02 01:00:56.831500 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-02 01:00:56.831511 | orchestrator | 
Friday 02 January 2026 00:58:47 +0000 (0:00:00.681) 0:00:05.735 ******** 2026-01-02 01:00:56.831522 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:00:56.831533 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:00:56.831543 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:00:56.831554 | orchestrator | 2026-01-02 01:00:56.831566 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-02 01:00:56.831576 | orchestrator | Friday 02 January 2026 00:58:48 +0000 (0:00:00.450) 0:00:06.186 ******** 2026-01-02 01:00:56.831587 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-02 01:00:56.831598 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-02 01:00:56.831609 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-02 01:00:56.831620 | orchestrator | 2026-01-02 01:00:56.831631 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-02 01:00:56.831642 | orchestrator | Friday 02 January 2026 00:58:50 +0000 (0:00:02.231) 0:00:08.418 ******** 2026-01-02 01:00:56.831653 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-02 01:00:56.831664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-02 01:00:56.831675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-02 01:00:56.831686 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.831698 | orchestrator | 2026-01-02 01:00:56.831726 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-02 01:00:56.831747 | orchestrator | Friday 02 January 2026 00:58:51 +0000 (0:00:00.713) 0:00:09.132 ******** 2026-01-02 01:00:56.831760 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.831775 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.831796 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.831807 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.831818 | orchestrator | 2026-01-02 01:00:56.831829 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-02 01:00:56.831840 | orchestrator | Friday 02 January 2026 00:58:52 +0000 (0:00:01.026) 0:00:10.159 ******** 2026-01-02 01:00:56.831854 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.831868 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.831880 | 
orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.831891 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.831902 | orchestrator | 2026-01-02 01:00:56.831914 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-02 01:00:56.831925 | orchestrator | Friday 02 January 2026 00:58:52 +0000 (0:00:00.387) 0:00:10.546 ******** 2026-01-02 01:00:56.831938 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a5feb9c706bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-02 00:58:48.880416', 'end': '2026-01-02 00:58:48.914365', 'delta': '0:00:00.033949', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a5feb9c706bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-02 01:00:56.831954 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '15aadfa5b185', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-02 00:58:49.621060', 'end': '2026-01-02 00:58:49.664109', 'delta': '0:00:00.043049', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['15aadfa5b185'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-02 01:00:56.831981 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '20409544239f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-02 00:58:50.176120', 'end': '2026-01-02 00:58:50.233771', 'delta': '0:00:00.057651', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['20409544239f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-02 01:00:56.832000 | orchestrator | 2026-01-02 01:00:56.832011 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-02 01:00:56.832022 | orchestrator | Friday 02 January 2026 00:58:52 +0000 (0:00:00.196) 0:00:10.743 ******** 2026-01-02 01:00:56.832033 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:00:56.832044 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:00:56.832055 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:00:56.832067 | orchestrator | 2026-01-02 01:00:56.832078 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-02 01:00:56.832089 | orchestrator | Friday 02 January 2026 00:58:53 +0000 (0:00:00.437) 0:00:11.181 ******** 2026-01-02 01:00:56.832100 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-01-02 01:00:56.832111 | orchestrator | 2026-01-02 01:00:56.832122 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-02 01:00:56.832133 | orchestrator | Friday 02 January 2026 00:58:54 +0000 (0:00:01.609) 0:00:12.790 ******** 2026-01-02 01:00:56.832144 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.832155 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.832166 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.832177 | orchestrator | 2026-01-02 01:00:56.832188 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-02 01:00:56.832199 | orchestrator | Friday 02 January 2026 00:58:55 +0000 (0:00:00.318) 0:00:13.109 ******** 2026-01-02 01:00:56.832210 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.832221 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.832232 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.832267 | orchestrator | 2026-01-02 01:00:56.832280 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-02 01:00:56.832291 | orchestrator | Friday 02 January 2026 00:58:55 +0000 (0:00:00.486) 0:00:13.595 ******** 2026-01-02 01:00:56.832301 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.832312 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.832323 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.832334 | orchestrator | 2026-01-02 01:00:56.832345 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-02 01:00:56.832356 | orchestrator | Friday 02 January 2026 00:58:56 +0000 (0:00:00.589) 0:00:14.185 ******** 2026-01-02 01:00:56.832367 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:00:56.832378 | orchestrator | 2026-01-02 01:00:56.832389 | orchestrator | TASK 
[ceph-facts : Generate cluster fsid] ************************************** 2026-01-02 01:00:56.832400 | orchestrator | Friday 02 January 2026 00:58:56 +0000 (0:00:00.149) 0:00:14.334 ******** 2026-01-02 01:00:56.832411 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.832422 | orchestrator | 2026-01-02 01:00:56.832433 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-02 01:00:56.832444 | orchestrator | Friday 02 January 2026 00:58:56 +0000 (0:00:00.239) 0:00:14.573 ******** 2026-01-02 01:00:56.832455 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.832466 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.832477 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.832488 | orchestrator | 2026-01-02 01:00:56.832499 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-02 01:00:56.832510 | orchestrator | Friday 02 January 2026 00:58:56 +0000 (0:00:00.329) 0:00:14.903 ******** 2026-01-02 01:00:56.832521 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.832539 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.832550 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.832561 | orchestrator | 2026-01-02 01:00:56.832572 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-02 01:00:56.832583 | orchestrator | Friday 02 January 2026 00:58:57 +0000 (0:00:00.360) 0:00:15.264 ******** 2026-01-02 01:00:56.832594 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.832605 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.832616 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.832627 | orchestrator | 2026-01-02 01:00:56.832638 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-02 01:00:56.832649 | orchestrator | Friday 02 January 2026 
00:58:57 +0000 (0:00:00.578) 0:00:15.842 ******** 2026-01-02 01:00:56.832663 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.832682 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.832700 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.832718 | orchestrator | 2026-01-02 01:00:56.832735 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-02 01:00:56.832754 | orchestrator | Friday 02 January 2026 00:58:58 +0000 (0:00:00.377) 0:00:16.219 ******** 2026-01-02 01:00:56.832772 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.832789 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.832809 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.832828 | orchestrator | 2026-01-02 01:00:56.832848 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-02 01:00:56.832868 | orchestrator | Friday 02 January 2026 00:58:58 +0000 (0:00:00.314) 0:00:16.534 ******** 2026-01-02 01:00:56.832885 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.832904 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.832916 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.832934 | orchestrator | 2026-01-02 01:00:56.832946 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-02 01:00:56.832964 | orchestrator | Friday 02 January 2026 00:58:58 +0000 (0:00:00.326) 0:00:16.860 ******** 2026-01-02 01:00:56.832976 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.832987 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.832997 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.833008 | orchestrator | 2026-01-02 01:00:56.833019 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-02 01:00:56.833030 | orchestrator | Friday 02 January 2026 
00:58:59 +0000 (0:00:00.530) 0:00:17.391 ******** 2026-01-02 01:00:56.833043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c483f3a2--63e3--5a58--8db6--ff291b90fd92-osd--block--c483f3a2--63e3--5a58--8db6--ff291b90fd92', 'dm-uuid-LVM-kadOhytslGICfsMPpKKIVUaEJZeEZBk73B7QIjOP9WodUfze1OHCoMt864UsTUvw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833056 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b4d4f98--8928--5a24--8a9c--c2096dcbe0fa-osd--block--7b4d4f98--8928--5a24--8a9c--c2096dcbe0fa', 'dm-uuid-LVM-sxJm4x8SlvbGRmvWJwUw9wXTKNruugDANlwCAkwEOOoflkJUMHpUFsVEuSUEhryA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833101 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part1', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part14', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part15', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part16', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 01:00:56.833208 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c483f3a2--63e3--5a58--8db6--ff291b90fd92-osd--block--c483f3a2--63e3--5a58--8db6--ff291b90fd92'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wNoFtq-1fxT-BlVw-9ASv-vo95-eRJy-yzlXtr', 'scsi-0QEMU_QEMU_HARDDISK_6d9d2903-81fe-42d1-9111-d7d9a87231b0', 'scsi-SQEMU_QEMU_HARDDISK_6d9d2903-81fe-42d1-9111-d7d9a87231b0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 01:00:56.833233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7b4d4f98--8928--5a24--8a9c--c2096dcbe0fa-osd--block--7b4d4f98--8928--5a24--8a9c--c2096dcbe0fa'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dgyX59-ld2G-gwPN-ZkQx-fE5Q-h7ke-QieFAN', 'scsi-0QEMU_QEMU_HARDDISK_91cfe094-4682-4bfc-95e3-88354566cb8a', 'scsi-SQEMU_QEMU_HARDDISK_91cfe094-4682-4bfc-95e3-88354566cb8a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 01:00:56.833274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ace49a83-40fe-462c-82a5-a32ee72a9346', 'scsi-SQEMU_QEMU_HARDDISK_ace49a83-40fe-462c-82a5-a32ee72a9346'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 01:00:56.833289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 01:00:56.833307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--98c0a427--0bfe--5560--90fa--409a46d34f73-osd--block--98c0a427--0bfe--5560--90fa--409a46d34f73', 'dm-uuid-LVM-ujYeRjdD1qfODf03CZCJdSrEePIiQB0u1Giu1X49vSEhSEheZdpGGJEEew5YAOc0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b563cbc7--469d--5dd4--bc68--32b49ff22a36-osd--block--b563cbc7--469d--5dd4--bc68--32b49ff22a36', 
'dm-uuid-LVM-aPTuh7VgWuNL0o8yp0aA0k4J5EcwWp1UwtvMBnJ1KazEPyOojPH041G8du5gyEEG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833331 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.833342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833354 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c17e839--2cbb--5f17--abcc--9f26ae111b42-osd--block--8c17e839--2cbb--5f17--abcc--9f26ae111b42', 'dm-uuid-LVM-CniWHMALJAJrblTkLmpMQNyFIUQNReVPb8Z2UREu9VHvJMqzpWRcds7QSRTO0ZNz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--37cfd703--64b2--55b0--ad28--4f6812d5fa0d-osd--block--37cfd703--64b2--55b0--ad28--4f6812d5fa0d', 'dm-uuid-LVM-xRKpP4K50Lzg4Aow2riAhqUnqA6bb9ERpDH3KbjKE8JTEzAW2NyffvPUW8kVZatV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6', 'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part1', 'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part14', 'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part15', 'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part16', 
'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 01:00:56.833538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.833550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--98c0a427--0bfe--5560--90fa--409a46d34f73-osd--block--98c0a427--0bfe--5560--90fa--409a46d34f73'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zcsi9f-4FfR-03B6-eElj-zHps-d8Au-IF9oXe', 'scsi-0QEMU_QEMU_HARDDISK_84499345-a879-443a-82ee-40e5571fa8cd', 'scsi-SQEMU_QEMU_HARDDISK_84499345-a879-443a-82ee-40e5571fa8cd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 01:00:56.833562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.834294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b563cbc7--469d--5dd4--bc68--32b49ff22a36-osd--block--b563cbc7--469d--5dd4--bc68--32b49ff22a36'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wqmApf-v7ib-7Bs3-YcaK-OSLi-GEEa-ycSF6r', 'scsi-0QEMU_QEMU_HARDDISK_7a849538-9b89-4e07-840a-8a2ecc10a58d', 'scsi-SQEMU_QEMU_HARDDISK_7a849538-9b89-4e07-840a-8a2ecc10a58d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 01:00:56.834364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.834390 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_496b1234-da7e-4975-8125-a1f8cbe1a452', 'scsi-SQEMU_QEMU_HARDDISK_496b1234-da7e-4975-8125-a1f8cbe1a452'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 01:00:56.834398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.834404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 01:00:56.834412 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.834421 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.834427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 01:00:56.834455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': 
'10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 01:00:56.834469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8c17e839--2cbb--5f17--abcc--9f26ae111b42-osd--block--8c17e839--2cbb--5f17--abcc--9f26ae111b42'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-I3Hbgj-sUvA-JwwA-9Uln-iaSG-kJrt-Ae9QOg', 'scsi-0QEMU_QEMU_HARDDISK_3f193762-36b0-4c27-b28e-8efb206edc66', 'scsi-SQEMU_QEMU_HARDDISK_3f193762-36b0-4c27-b28e-8efb206edc66'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 01:00:56.834476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--37cfd703--64b2--55b0--ad28--4f6812d5fa0d-osd--block--37cfd703--64b2--55b0--ad28--4f6812d5fa0d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KNoUEf-TJ8y-mEou-hIgr-GLCl-tNSf-zuT3gs', 'scsi-0QEMU_QEMU_HARDDISK_26cdd52f-83be-4086-bce2-9cb6df4f24ab', 'scsi-SQEMU_QEMU_HARDDISK_26cdd52f-83be-4086-bce2-9cb6df4f24ab'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 01:00:56.834482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a47a132-03ad-4adf-a37b-d405efe1a07c', 'scsi-SQEMU_QEMU_HARDDISK_3a47a132-03ad-4adf-a37b-d405efe1a07c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 01:00:56.834498 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 01:00:56.834505 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.834512 | orchestrator | 2026-01-02 01:00:56.834518 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-01-02 01:00:56.834525 | orchestrator | Friday 02 January 2026 00:58:59 +0000 (0:00:00.601) 0:00:17.993 ******** 2026-01-02 01:00:56.834536 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c483f3a2--63e3--5a58--8db6--ff291b90fd92-osd--block--c483f3a2--63e3--5a58--8db6--ff291b90fd92', 'dm-uuid-LVM-kadOhytslGICfsMPpKKIVUaEJZeEZBk73B7QIjOP9WodUfze1OHCoMt864UsTUvw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834543 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b4d4f98--8928--5a24--8a9c--c2096dcbe0fa-osd--block--7b4d4f98--8928--5a24--8a9c--c2096dcbe0fa', 'dm-uuid-LVM-sxJm4x8SlvbGRmvWJwUw9wXTKNruugDANlwCAkwEOOoflkJUMHpUFsVEuSUEhryA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834549 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834556 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834562 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834597 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834612 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834619 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834625 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834631 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834647 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part1', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part14', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part15', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part16', 'scsi-SQEMU_QEMU_HARDDISK_817579c1-b31d-4bbf-8af4-60793d227397-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-02 01:00:56.834660 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c483f3a2--63e3--5a58--8db6--ff291b90fd92-osd--block--c483f3a2--63e3--5a58--8db6--ff291b90fd92'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wNoFtq-1fxT-BlVw-9ASv-vo95-eRJy-yzlXtr', 'scsi-0QEMU_QEMU_HARDDISK_6d9d2903-81fe-42d1-9111-d7d9a87231b0', 'scsi-SQEMU_QEMU_HARDDISK_6d9d2903-81fe-42d1-9111-d7d9a87231b0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834668 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7b4d4f98--8928--5a24--8a9c--c2096dcbe0fa-osd--block--7b4d4f98--8928--5a24--8a9c--c2096dcbe0fa'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dgyX59-ld2G-gwPN-ZkQx-fE5Q-h7ke-QieFAN', 'scsi-0QEMU_QEMU_HARDDISK_91cfe094-4682-4bfc-95e3-88354566cb8a', 'scsi-SQEMU_QEMU_HARDDISK_91cfe094-4682-4bfc-95e3-88354566cb8a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834675 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ace49a83-40fe-462c-82a5-a32ee72a9346', 'scsi-SQEMU_QEMU_HARDDISK_ace49a83-40fe-462c-82a5-a32ee72a9346'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834689 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c17e839--2cbb--5f17--abcc--9f26ae111b42-osd--block--8c17e839--2cbb--5f17--abcc--9f26ae111b42', 'dm-uuid-LVM-CniWHMALJAJrblTkLmpMQNyFIUQNReVPb8Z2UREu9VHvJMqzpWRcds7QSRTO0ZNz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834700 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834707 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--37cfd703--64b2--55b0--ad28--4f6812d5fa0d-osd--block--37cfd703--64b2--55b0--ad28--4f6812d5fa0d', 'dm-uuid-LVM-xRKpP4K50Lzg4Aow2riAhqUnqA6bb9ERpDH3KbjKE8JTEzAW2NyffvPUW8kVZatV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834713 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.834719 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834725 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834732 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--98c0a427--0bfe--5560--90fa--409a46d34f73-osd--block--98c0a427--0bfe--5560--90fa--409a46d34f73', 'dm-uuid-LVM-ujYeRjdD1qfODf03CZCJdSrEePIiQB0u1Giu1X49vSEhSEheZdpGGJEEew5YAOc0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834749 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b563cbc7--469d--5dd4--bc68--32b49ff22a36-osd--block--b563cbc7--469d--5dd4--bc68--32b49ff22a36', 'dm-uuid-LVM-aPTuh7VgWuNL0o8yp0aA0k4J5EcwWp1UwtvMBnJ1KazEPyOojPH041G8du5gyEEG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834755 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834762 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834768 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834774 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834780 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834802 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834809 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834816 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834822 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834828 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834834 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 
01:00:56.834841 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834860 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4449738-099c-443f-90a1-9eef773d53ef-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834867 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834874 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8c17e839--2cbb--5f17--abcc--9f26ae111b42-osd--block--8c17e839--2cbb--5f17--abcc--9f26ae111b42'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-I3Hbgj-sUvA-JwwA-9Uln-iaSG-kJrt-Ae9QOg', 'scsi-0QEMU_QEMU_HARDDISK_3f193762-36b0-4c27-b28e-8efb206edc66', 'scsi-SQEMU_QEMU_HARDDISK_3f193762-36b0-4c27-b28e-8efb206edc66'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834891 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834898 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--37cfd703--64b2--55b0--ad28--4f6812d5fa0d-osd--block--37cfd703--64b2--55b0--ad28--4f6812d5fa0d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KNoUEf-TJ8y-mEou-hIgr-GLCl-tNSf-zuT3gs', 'scsi-0QEMU_QEMU_HARDDISK_26cdd52f-83be-4086-bce2-9cb6df4f24ab', 'scsi-SQEMU_QEMU_HARDDISK_26cdd52f-83be-4086-bce2-9cb6df4f24ab'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834904 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6', 'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part1', 'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part14', 'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part15', 
'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part16', 'scsi-SQEMU_QEMU_HARDDISK_cfac6910-579b-4d78-84a3-2d39a75847a6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834922 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a47a132-03ad-4adf-a37b-d405efe1a07c', 'scsi-SQEMU_QEMU_HARDDISK_3a47a132-03ad-4adf-a37b-d405efe1a07c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834929 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834935 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--98c0a427--0bfe--5560--90fa--409a46d34f73-osd--block--98c0a427--0bfe--5560--90fa--409a46d34f73'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zcsi9f-4FfR-03B6-eElj-zHps-d8Au-IF9oXe', 'scsi-0QEMU_QEMU_HARDDISK_84499345-a879-443a-82ee-40e5571fa8cd', 'scsi-SQEMU_QEMU_HARDDISK_84499345-a879-443a-82ee-40e5571fa8cd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834942 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.834948 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b563cbc7--469d--5dd4--bc68--32b49ff22a36-osd--block--b563cbc7--469d--5dd4--bc68--32b49ff22a36'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wqmApf-v7ib-7Bs3-YcaK-OSLi-GEEa-ycSF6r', 'scsi-0QEMU_QEMU_HARDDISK_7a849538-9b89-4e07-840a-8a2ecc10a58d', 'scsi-SQEMU_QEMU_HARDDISK_7a849538-9b89-4e07-840a-8a2ecc10a58d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834954 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_496b1234-da7e-4975-8125-a1f8cbe1a452', 'scsi-SQEMU_QEMU_HARDDISK_496b1234-da7e-4975-8125-a1f8cbe1a452'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834970 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 01:00:56.834977 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.834983 | orchestrator | 2026-01-02 01:00:56.834989 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-02 01:00:56.834995 | orchestrator | Friday 02 January 2026 00:59:00 +0000 (0:00:00.685) 0:00:18.679 ******** 2026-01-02 01:00:56.835002 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:00:56.835008 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:00:56.835014 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:00:56.835020 | orchestrator | 2026-01-02 01:00:56.835026 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2026-01-02 01:00:56.835032 | orchestrator | Friday 02 January 2026 00:59:01 +0000 (0:00:00.709) 0:00:19.388 ******** 2026-01-02 01:00:56.835037 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:00:56.835043 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:00:56.835049 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:00:56.835055 | orchestrator | 2026-01-02 01:00:56.835061 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-02 01:00:56.835067 | orchestrator | Friday 02 January 2026 00:59:01 +0000 (0:00:00.556) 0:00:19.944 ******** 2026-01-02 01:00:56.835073 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:00:56.835079 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:00:56.835085 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:00:56.835091 | orchestrator | 2026-01-02 01:00:56.835097 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-02 01:00:56.835103 | orchestrator | Friday 02 January 2026 00:59:02 +0000 (0:00:00.724) 0:00:20.669 ******** 2026-01-02 01:00:56.835109 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.835115 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.835121 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.835127 | orchestrator | 2026-01-02 01:00:56.835133 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-02 01:00:56.835139 | orchestrator | Friday 02 January 2026 00:59:02 +0000 (0:00:00.342) 0:00:21.012 ******** 2026-01-02 01:00:56.835145 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.835151 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.835157 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.835162 | orchestrator | 2026-01-02 01:00:56.835168 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2026-01-02 01:00:56.835174 | orchestrator | Friday 02 January 2026 00:59:03 +0000 (0:00:00.444) 0:00:21.456 ******** 2026-01-02 01:00:56.835180 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.835186 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.835192 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.835198 | orchestrator | 2026-01-02 01:00:56.835204 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-02 01:00:56.835213 | orchestrator | Friday 02 January 2026 00:59:03 +0000 (0:00:00.534) 0:00:21.991 ******** 2026-01-02 01:00:56.835219 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-02 01:00:56.835226 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-02 01:00:56.835231 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-02 01:00:56.835237 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-02 01:00:56.835269 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-02 01:00:56.835278 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-02 01:00:56.835284 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-02 01:00:56.835289 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-02 01:00:56.835295 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-02 01:00:56.835301 | orchestrator | 2026-01-02 01:00:56.835308 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-02 01:00:56.835314 | orchestrator | Friday 02 January 2026 00:59:04 +0000 (0:00:00.914) 0:00:22.905 ******** 2026-01-02 01:00:56.835319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-02 01:00:56.835325 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-02 01:00:56.835331 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2026-01-02 01:00:56.835337 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.835343 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-02 01:00:56.835349 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-02 01:00:56.835355 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-02 01:00:56.835361 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.835367 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-02 01:00:56.835372 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-02 01:00:56.835378 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-02 01:00:56.835384 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.835390 | orchestrator | 2026-01-02 01:00:56.835396 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-02 01:00:56.835402 | orchestrator | Friday 02 January 2026 00:59:05 +0000 (0:00:00.391) 0:00:23.297 ******** 2026-01-02 01:00:56.835408 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 01:00:56.835414 | orchestrator | 2026-01-02 01:00:56.835421 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-02 01:00:56.835428 | orchestrator | Friday 02 January 2026 00:59:06 +0000 (0:00:00.779) 0:00:24.076 ******** 2026-01-02 01:00:56.835438 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.835445 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.835450 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.835456 | orchestrator | 2026-01-02 01:00:56.835466 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2026-01-02 01:00:56.835472 | orchestrator | Friday 02 January 2026 00:59:06 +0000 (0:00:00.374) 0:00:24.451 ******** 2026-01-02 01:00:56.835478 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.835483 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.835489 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.835495 | orchestrator | 2026-01-02 01:00:56.835501 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-02 01:00:56.835507 | orchestrator | Friday 02 January 2026 00:59:06 +0000 (0:00:00.343) 0:00:24.794 ******** 2026-01-02 01:00:56.835513 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.835518 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.835524 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:56.835530 | orchestrator | 2026-01-02 01:00:56.835540 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-02 01:00:56.835546 | orchestrator | Friday 02 January 2026 00:59:07 +0000 (0:00:00.331) 0:00:25.125 ******** 2026-01-02 01:00:56.835552 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:00:56.835558 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:00:56.835564 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:00:56.835569 | orchestrator | 2026-01-02 01:00:56.835575 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-02 01:00:56.835581 | orchestrator | Friday 02 January 2026 00:59:08 +0000 (0:00:00.928) 0:00:26.053 ******** 2026-01-02 01:00:56.835587 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-02 01:00:56.835593 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-02 01:00:56.835599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-02 01:00:56.835605 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.835610 | 
orchestrator | 2026-01-02 01:00:56.835616 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-02 01:00:56.835622 | orchestrator | Friday 02 January 2026 00:59:08 +0000 (0:00:00.407) 0:00:26.460 ******** 2026-01-02 01:00:56.835628 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-02 01:00:56.835634 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-02 01:00:56.835640 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-02 01:00:56.835646 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.835651 | orchestrator | 2026-01-02 01:00:56.835657 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-02 01:00:56.835663 | orchestrator | Friday 02 January 2026 00:59:08 +0000 (0:00:00.378) 0:00:26.839 ******** 2026-01-02 01:00:56.835669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-02 01:00:56.835675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-02 01:00:56.835680 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-02 01:00:56.835686 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.835692 | orchestrator | 2026-01-02 01:00:56.835698 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-02 01:00:56.835704 | orchestrator | Friday 02 January 2026 00:59:09 +0000 (0:00:00.395) 0:00:27.235 ******** 2026-01-02 01:00:56.835710 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:00:56.835715 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:00:56.835721 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:00:56.835727 | orchestrator | 2026-01-02 01:00:56.835733 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-02 01:00:56.835739 | orchestrator | Friday 02 January 2026 00:59:09 +0000 
(0:00:00.363) 0:00:27.599 ******** 2026-01-02 01:00:56.835745 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-02 01:00:56.835750 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-02 01:00:56.835756 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-02 01:00:56.835762 | orchestrator | 2026-01-02 01:00:56.835768 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-02 01:00:56.835774 | orchestrator | Friday 02 January 2026 00:59:10 +0000 (0:00:00.523) 0:00:28.122 ******** 2026-01-02 01:00:56.835780 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-02 01:00:56.835785 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-02 01:00:56.835791 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-02 01:00:56.835797 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-02 01:00:56.835803 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-02 01:00:56.835809 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-02 01:00:56.835815 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-02 01:00:56.835825 | orchestrator | 2026-01-02 01:00:56.835831 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-02 01:00:56.835836 | orchestrator | Friday 02 January 2026 00:59:11 +0000 (0:00:01.069) 0:00:29.192 ******** 2026-01-02 01:00:56.835842 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-02 01:00:56.835848 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-02 01:00:56.835854 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-02 01:00:56.835860 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-02 01:00:56.835866 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-02 01:00:56.835872 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-02 01:00:56.835881 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-02 01:00:56.835887 | orchestrator | 2026-01-02 01:00:56.835897 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-01-02 01:00:56.835903 | orchestrator | Friday 02 January 2026 00:59:13 +0000 (0:00:02.060) 0:00:31.253 ******** 2026-01-02 01:00:56.835909 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:56.835914 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:56.835920 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-01-02 01:00:56.835926 | orchestrator | 2026-01-02 01:00:56.835932 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-01-02 01:00:56.835938 | orchestrator | Friday 02 January 2026 00:59:13 +0000 (0:00:00.390) 0:00:31.643 ******** 2026-01-02 01:00:56.835944 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-02 01:00:56.835951 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-01-02 01:00:56.835957 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-02 01:00:56.835964 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-02 01:00:56.835970 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-02 01:00:56.835976 | orchestrator | 2026-01-02 01:00:56.835982 | orchestrator | TASK [generate keys] *********************************************************** 2026-01-02 01:00:56.835988 | orchestrator | Friday 02 January 2026 00:59:59 +0000 (0:00:46.335) 0:01:17.979 ******** 2026-01-02 01:00:56.835994 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 01:00:56.835999 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 01:00:56.836005 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 01:00:56.836011 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 01:00:56.836023 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 01:00:56.836029 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 
01:00:56.836035 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-01-02 01:00:56.836040 | orchestrator | 2026-01-02 01:00:56.836046 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-01-02 01:00:56.836052 | orchestrator | Friday 02 January 2026 01:00:25 +0000 (0:00:25.550) 0:01:43.530 ******** 2026-01-02 01:00:56.836058 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 01:00:56.836064 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 01:00:56.836070 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 01:00:56.836076 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 01:00:56.836081 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 01:00:56.836087 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 01:00:56.836093 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-02 01:00:56.836099 | orchestrator | 2026-01-02 01:00:56.836105 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-01-02 01:00:56.836111 | orchestrator | Friday 02 January 2026 01:00:38 +0000 (0:00:12.769) 0:01:56.299 ******** 2026-01-02 01:00:56.836116 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 01:00:56.836122 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-02 01:00:56.836128 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-02 01:00:56.836134 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 01:00:56.836140 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-01-02 01:00:56.836150 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-02 01:00:56.836160 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 01:00:56.836166 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-02 01:00:56.836172 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-02 01:00:56.836178 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 01:00:56.836184 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-02 01:00:56.836189 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-02 01:00:56.836195 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 01:00:56.836201 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-02 01:00:56.836207 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-02 01:00:56.836213 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 01:00:56.836219 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-02 01:00:56.836225 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-02 01:00:56.836231 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-01-02 01:00:56.836237 | orchestrator | 2026-01-02 01:00:56.836257 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 01:00:56.836264 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-02 01:00:56.836275 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-02 01:00:56.836281 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-02 01:00:56.836287 | orchestrator | 2026-01-02 01:00:56.836293 | orchestrator | 2026-01-02 01:00:56.836299 | orchestrator | 2026-01-02 01:00:56.836305 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 01:00:56.836311 | orchestrator | Friday 02 January 2026 01:00:56 +0000 (0:00:18.040) 0:02:14.340 ******** 2026-01-02 01:00:56.836317 | orchestrator | =============================================================================== 2026-01-02 01:00:56.836323 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.34s 2026-01-02 01:00:56.836328 | orchestrator | generate keys ---------------------------------------------------------- 25.55s 2026-01-02 01:00:56.836334 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.04s 2026-01-02 01:00:56.836340 | orchestrator | get keys from monitors ------------------------------------------------- 12.77s 2026-01-02 01:00:56.836346 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.23s 2026-01-02 01:00:56.836352 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.06s 2026-01-02 01:00:56.836358 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.61s 2026-01-02 01:00:56.836363 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.07s 2026-01-02 01:00:56.836369 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.03s 2026-01-02 01:00:56.836375 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.93s 2026-01-02 
01:00:56.836381 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.91s 2026-01-02 01:00:56.836387 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.89s 2026-01-02 01:00:56.836393 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.78s 2026-01-02 01:00:56.836399 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.72s 2026-01-02 01:00:56.836405 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.71s 2026-01-02 01:00:56.836411 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.71s 2026-01-02 01:00:56.836416 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.69s 2026-01-02 01:00:56.836422 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.68s 2026-01-02 01:00:56.836428 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.68s 2026-01-02 01:00:56.836434 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.65s 2026-01-02 01:00:56.836440 | orchestrator | 2026-01-02 01:00:56 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:00:56.836446 | orchestrator | 2026-01-02 01:00:56 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:00:56.836452 | orchestrator | 2026-01-02 01:00:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:59.895062 | orchestrator | 2026-01-02 01:00:59 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:00:59.902522 | orchestrator | 2026-01-02 01:00:59 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:00:59.904653 | orchestrator | 2026-01-02 01:00:59 | INFO  | Task 
2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:00:59.908286 | orchestrator | 2026-01-02 01:00:59 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:00:59.910755 | orchestrator | 2026-01-02 01:00:59 | INFO  | Task 02ca3a9b-313c-4803-a287-46a9cdbd32d2 is in state STARTED 2026-01-02 01:00:59.911007 | orchestrator | 2026-01-02 01:00:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:02.966082 | orchestrator | 2026-01-02 01:01:02 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:01:02.967454 | orchestrator | 2026-01-02 01:01:02 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:01:02.969390 | orchestrator | 2026-01-02 01:01:02 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:01:02.970882 | orchestrator | 2026-01-02 01:01:02 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:01:02.974193 | orchestrator | 2026-01-02 01:01:02 | INFO  | Task 02ca3a9b-313c-4803-a287-46a9cdbd32d2 is in state STARTED 2026-01-02 01:01:02.974628 | orchestrator | 2026-01-02 01:01:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:06.020525 | orchestrator | 2026-01-02 01:01:06 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:01:06.020621 | orchestrator | 2026-01-02 01:01:06 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:01:06.021668 | orchestrator | 2026-01-02 01:01:06 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:01:06.022796 | orchestrator | 2026-01-02 01:01:06 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:01:06.023961 | orchestrator | 2026-01-02 01:01:06 | INFO  | Task 02ca3a9b-313c-4803-a287-46a9cdbd32d2 is in state STARTED 2026-01-02 01:01:06.024199 | orchestrator | 2026-01-02 01:01:06 | INFO  | Wait 1 
second(s) until the next check 2026-01-02 01:01:09.068792 | orchestrator | 2026-01-02 01:01:09 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:01:09.069977 | orchestrator | 2026-01-02 01:01:09 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:01:09.071925 | orchestrator | 2026-01-02 01:01:09 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:01:09.075104 | orchestrator | 2026-01-02 01:01:09 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:01:09.077675 | orchestrator | 2026-01-02 01:01:09 | INFO  | Task 02ca3a9b-313c-4803-a287-46a9cdbd32d2 is in state STARTED 2026-01-02 01:01:09.077796 | orchestrator | 2026-01-02 01:01:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:12.130451 | orchestrator | 2026-01-02 01:01:12 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:01:12.131303 | orchestrator | 2026-01-02 01:01:12 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:01:12.131822 | orchestrator | 2026-01-02 01:01:12 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:01:12.132480 | orchestrator | 2026-01-02 01:01:12 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:01:12.133637 | orchestrator | 2026-01-02 01:01:12 | INFO  | Task 02ca3a9b-313c-4803-a287-46a9cdbd32d2 is in state STARTED 2026-01-02 01:01:12.133852 | orchestrator | 2026-01-02 01:01:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:15.195545 | orchestrator | 2026-01-02 01:01:15 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:01:15.197590 | orchestrator | 2026-01-02 01:01:15 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:01:15.199054 | orchestrator | 2026-01-02 01:01:15 | INFO  | Task 
2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:01:15.202578 | orchestrator | 2026-01-02 01:01:15 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:01:15.203864 | orchestrator | 2026-01-02 01:01:15 | INFO  | Task 02ca3a9b-313c-4803-a287-46a9cdbd32d2 is in state STARTED 2026-01-02 01:01:15.204386 | orchestrator | 2026-01-02 01:01:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:18.255661 | orchestrator | 2026-01-02 01:01:18 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:01:18.257312 | orchestrator | 2026-01-02 01:01:18 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:01:18.259521 | orchestrator | 2026-01-02 01:01:18 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state STARTED 2026-01-02 01:01:18.261303 | orchestrator | 2026-01-02 01:01:18 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:01:18.262979 | orchestrator | 2026-01-02 01:01:18 | INFO  | Task 02ca3a9b-313c-4803-a287-46a9cdbd32d2 is in state STARTED 2026-01-02 01:01:18.263000 | orchestrator | 2026-01-02 01:01:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:21.311981 | orchestrator | 2026-01-02 01:01:21 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:01:21.314675 | orchestrator | 2026-01-02 01:01:21 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:01:21.317671 | orchestrator | 2026-01-02 01:01:21 | INFO  | Task 2fad24b6-cc3d-4f90-907e-9fc878c03d04 is in state SUCCESS 2026-01-02 01:01:21.319594 | orchestrator | 2026-01-02 01:01:21.319649 | orchestrator | 2026-01-02 01:01:21.319659 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 01:01:21.319666 | orchestrator | 2026-01-02 01:01:21.319673 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-01-02 01:01:21.319706 | orchestrator | Friday 02 January 2026 00:59:30 +0000 (0:00:00.264) 0:00:00.264 ******** 2026-01-02 01:01:21.319713 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:01:21.319721 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:01:21.319726 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:01:21.319732 | orchestrator | 2026-01-02 01:01:21.319738 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 01:01:21.319744 | orchestrator | Friday 02 January 2026 00:59:30 +0000 (0:00:00.303) 0:00:00.567 ******** 2026-01-02 01:01:21.319750 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-01-02 01:01:21.319756 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-01-02 01:01:21.319761 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-01-02 01:01:21.319767 | orchestrator | 2026-01-02 01:01:21.319773 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-01-02 01:01:21.319778 | orchestrator | 2026-01-02 01:01:21.319784 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-02 01:01:21.319789 | orchestrator | Friday 02 January 2026 00:59:30 +0000 (0:00:00.472) 0:00:01.040 ******** 2026-01-02 01:01:21.319795 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 01:01:21.319801 | orchestrator | 2026-01-02 01:01:21.319807 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-01-02 01:01:21.319812 | orchestrator | Friday 02 January 2026 00:59:31 +0000 (0:00:00.531) 0:00:01.571 ******** 2026-01-02 01:01:21.319835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}}) 2026-01-02 01:01:21.319873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 01:01:21.319885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 01:01:21.319897 | orchestrator | 2026-01-02 01:01:21.319903 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-01-02 01:01:21.319908 | orchestrator | Friday 02 January 2026 00:59:32 +0000 (0:00:01.297) 0:00:02.869 ******** 2026-01-02 01:01:21.319914 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:01:21.319920 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:01:21.319925 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:01:21.319931 | orchestrator | 2026-01-02 01:01:21.319936 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-02 01:01:21.319941 | orchestrator | Friday 02 January 2026 00:59:33 +0000 (0:00:00.511) 0:00:03.380 ******** 2026-01-02 01:01:21.319947 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-02 01:01:21.319956 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-02 01:01:21.319962 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-01-02 01:01:21.319968 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-01-02 01:01:21.319973 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-01-02 01:01:21.319979 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-01-02 01:01:21.319984 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  
2026-01-02 01:01:21.319990 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-01-02 01:01:21.319995 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-02 01:01:21.320001 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-02 01:01:21.320006 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-01-02 01:01:21.320012 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-01-02 01:01:21.320025 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-01-02 01:01:21.320030 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-01-02 01:01:21.320035 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-01-02 01:01:21.320041 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-01-02 01:01:21.320046 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-02 01:01:21.320052 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-02 01:01:21.320057 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-01-02 01:01:21.320063 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-01-02 01:01:21.320068 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-01-02 01:01:21.320073 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-01-02 01:01:21.320079 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-01-02 01:01:21.320084 | orchestrator | skipping: [testbed-node-2] => 
(item={'name': 'watcher', 'enabled': False})  2026-01-02 01:01:21.320090 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-01-02 01:01:21.320098 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-01-02 01:01:21.320103 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-01-02 01:01:21.320109 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-01-02 01:01:21.320114 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-01-02 01:01:21.320120 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-01-02 01:01:21.320125 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-01-02 01:01:21.320131 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-01-02 01:01:21.320145 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-01-02 01:01:21.320156 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 
'enabled': True}) 2026-01-02 01:01:21.320165 | orchestrator | 2026-01-02 01:01:21.320174 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-02 01:01:21.320183 | orchestrator | Friday 02 January 2026 00:59:33 +0000 (0:00:00.760) 0:00:04.141 ******** 2026-01-02 01:01:21.320192 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:01:21.320202 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:01:21.320274 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:01:21.320283 | orchestrator | 2026-01-02 01:01:21.320290 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-02 01:01:21.320298 | orchestrator | Friday 02 January 2026 00:59:34 +0000 (0:00:00.300) 0:00:04.442 ******** 2026-01-02 01:01:21.320304 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:01:21.320315 | orchestrator | 2026-01-02 01:01:21.320326 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-02 01:01:21.320332 | orchestrator | Friday 02 January 2026 00:59:34 +0000 (0:00:00.142) 0:00:04.585 ******** 2026-01-02 01:01:21.320337 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:01:21.320343 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:01:21.320349 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:01:21.320354 | orchestrator | 2026-01-02 01:01:21.320360 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-02 01:01:21.320365 | orchestrator | Friday 02 January 2026 00:59:34 +0000 (0:00:00.465) 0:00:05.050 ******** 2026-01-02 01:01:21.320371 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:01:21.320376 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:01:21.320382 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:01:21.320387 | orchestrator | 2026-01-02 01:01:21.320393 | orchestrator | TASK [horizon : Check if policies shall be overwritten] 
************************ 2026-01-02 01:01:21.320398 | orchestrator | Friday 02 January 2026 00:59:35 +0000 (0:00:00.303) 0:00:05.354 ******** 2026-01-02 01:01:21.320404 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:01:21.320409 | orchestrator | 2026-01-02 01:01:21.320415 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-02 01:01:21.320421 | orchestrator | Friday 02 January 2026 00:59:35 +0000 (0:00:00.153) 0:00:05.508 ******** 2026-01-02 01:01:21.320426 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:01:21.320431 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:01:21.320437 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:01:21.320443 | orchestrator | 2026-01-02 01:01:21.320448 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-02 01:01:21.320454 | orchestrator | Friday 02 January 2026 00:59:35 +0000 (0:00:00.342) 0:00:05.851 ******** 2026-01-02 01:01:21.320459 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:01:21.320465 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:01:21.320470 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:01:21.320476 | orchestrator | 2026-01-02 01:01:21.320481 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-02 01:01:21.320487 | orchestrator | Friday 02 January 2026 00:59:36 +0000 (0:00:00.389) 0:00:06.240 ******** 2026-01-02 01:01:21.320492 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:01:21.320498 | orchestrator | 2026-01-02 01:01:21.320503 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-02 01:01:21.320509 | orchestrator | Friday 02 January 2026 00:59:36 +0000 (0:00:00.146) 0:00:06.387 ******** 2026-01-02 01:01:21.320514 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:01:21.320520 | orchestrator | skipping: [testbed-node-1] 2026-01-02 
01:01:21.320525 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:01:21.320531 | orchestrator | 2026-01-02 01:01:21.320537 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-02 01:01:21.320542 | orchestrator | Friday 02 January 2026 00:59:36 +0000 (0:00:00.521) 0:00:06.909 ******** 2026-01-02 01:01:21.320548 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:01:21.320553 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:01:21.320559 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:01:21.320564 | orchestrator | 2026-01-02 01:01:21.320570 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-02 01:01:21.320575 | orchestrator | Friday 02 January 2026 00:59:37 +0000 (0:00:00.341) 0:00:07.251 ******** 2026-01-02 01:01:21.320581 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:01:21.320586 | orchestrator | 2026-01-02 01:01:21.320592 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-02 01:01:21.320598 | orchestrator | Friday 02 January 2026 00:59:37 +0000 (0:00:00.139) 0:00:07.390 ******** 2026-01-02 01:01:21.320603 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:01:21.320609 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:01:21.320614 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:01:21.320620 | orchestrator | 2026-01-02 01:01:21.320625 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-02 01:01:21.320634 | orchestrator | Friday 02 January 2026 00:59:37 +0000 (0:00:00.291) 0:00:07.682 ******** 2026-01-02 01:01:21.320640 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:01:21.320646 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:01:21.320651 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:01:21.320657 | orchestrator | 2026-01-02 01:01:21.320662 | orchestrator | TASK [horizon : Check if policies 
shall be overwritten] ************************ 2026-01-02 01:01:21.320668 | orchestrator | Friday 02 January 2026 00:59:38 +0000 (0:00:00.547) 0:00:08.229 ******** 2026-01-02 01:01:21.320673 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:01:21.320679 | orchestrator | 2026-01-02 01:01:21.320684 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-02 01:01:21.320690 | orchestrator | Friday 02 January 2026 00:59:38 +0000 (0:00:00.144) 0:00:08.373 ******** 2026-01-02 01:01:21.320695 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:01:21.320701 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:01:21.320706 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:01:21.320712 | orchestrator | 2026-01-02 01:01:21.320718 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-02 01:01:21.320727 | orchestrator | Friday 02 January 2026 00:59:38 +0000 (0:00:00.338) 0:00:08.711 ******** 2026-01-02 01:01:21.320733 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:01:21.320738 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:01:21.320744 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:01:21.320750 | orchestrator | 2026-01-02 01:01:21.320755 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-02 01:01:21.320761 | orchestrator | Friday 02 January 2026 00:59:38 +0000 (0:00:00.313) 0:00:09.024 ******** 2026-01-02 01:01:21.320766 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:01:21.320772 | orchestrator | 2026-01-02 01:01:21.320778 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-02 01:01:21.320784 | orchestrator | Friday 02 January 2026 00:59:39 +0000 (0:00:00.163) 0:00:09.188 ******** 2026-01-02 01:01:21.320790 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:01:21.320797 | orchestrator | skipping: [testbed-node-1] 
2026-01-02 01:01:21.320803 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:01:21.320809 | orchestrator | 2026-01-02 01:01:21.320816 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-02 01:01:21.320826 | orchestrator | Friday 02 January 2026 00:59:39 +0000 (0:00:00.352) 0:00:09.540 ******** 2026-01-02 01:01:21.320832 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:01:21.320839 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:01:21.320845 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:01:21.320851 | orchestrator | 2026-01-02 01:01:21.320858 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-02 01:01:21.320864 | orchestrator | Friday 02 January 2026 00:59:39 +0000 (0:00:00.568) 0:00:10.109 ******** 2026-01-02 01:01:21.320870 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:01:21.320876 | orchestrator | 2026-01-02 01:01:21.320883 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-02 01:01:21.320889 | orchestrator | Friday 02 January 2026 00:59:40 +0000 (0:00:00.143) 0:00:10.252 ******** 2026-01-02 01:01:21.320895 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:01:21.320902 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:01:21.320908 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:01:21.320914 | orchestrator | 2026-01-02 01:01:21.320920 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-02 01:01:21.320927 | orchestrator | Friday 02 January 2026 00:59:40 +0000 (0:00:00.314) 0:00:10.567 ******** 2026-01-02 01:01:21.320957 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:01:21.320965 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:01:21.320971 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:01:21.320977 | orchestrator | 2026-01-02 01:01:21.320984 | orchestrator | TASK [horizon : Check 
if policies shall be overwritten] ************************
2026-01-02 01:01:21.320995 | orchestrator | Friday 02 January 2026 00:59:40 +0000 (0:00:00.389) 0:00:10.956 ********
2026-01-02 01:01:21.321001 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:01:21.321008 | orchestrator |
2026-01-02 01:01:21.321014 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-02 01:01:21.321021 | orchestrator | Friday 02 January 2026 00:59:40 +0000 (0:00:00.182) 0:00:11.139 ********
2026-01-02 01:01:21.321027 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:01:21.321033 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:01:21.321040 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:01:21.321046 | orchestrator |
2026-01-02 01:01:21.321053 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-02 01:01:21.321059 | orchestrator | Friday 02 January 2026 00:59:41 +0000 (0:00:00.295) 0:00:11.434 ********
2026-01-02 01:01:21.321065 | orchestrator | ok: [testbed-node-0]
2026-01-02 01:01:21.321072 | orchestrator | ok: [testbed-node-1]
2026-01-02 01:01:21.321078 | orchestrator | ok: [testbed-node-2]
2026-01-02 01:01:21.321085 | orchestrator |
2026-01-02 01:01:21.321091 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-02 01:01:21.321097 | orchestrator | Friday 02 January 2026 00:59:41 +0000 (0:00:00.561) 0:00:11.995 ********
2026-01-02 01:01:21.321104 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:01:21.321110 | orchestrator |
2026-01-02 01:01:21.321117 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-02 01:01:21.321123 | orchestrator | Friday 02 January 2026 00:59:41 +0000 (0:00:00.122) 0:00:12.117 ********
2026-01-02 01:01:21.321130 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:01:21.321140 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:01:21.321151 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:01:21.321160 | orchestrator |
2026-01-02 01:01:21.321171 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-02 01:01:21.321182 | orchestrator | Friday 02 January 2026 00:59:42 +0000 (0:00:00.321) 0:00:12.439 ********
2026-01-02 01:01:21.321193 | orchestrator | ok: [testbed-node-0]
2026-01-02 01:01:21.321203 | orchestrator | ok: [testbed-node-1]
2026-01-02 01:01:21.321229 | orchestrator | ok: [testbed-node-2]
2026-01-02 01:01:21.321237 | orchestrator |
2026-01-02 01:01:21.321243 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-02 01:01:21.321250 | orchestrator | Friday 02 January 2026 00:59:42 +0000 (0:00:00.317) 0:00:12.756 ********
2026-01-02 01:01:21.321256 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:01:21.321262 | orchestrator |
2026-01-02 01:01:21.321268 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-02 01:01:21.321275 | orchestrator | Friday 02 January 2026 00:59:42 +0000 (0:00:00.117) 0:00:12.874 ********
2026-01-02 01:01:21.321281 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:01:21.321287 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:01:21.321293 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:01:21.321300 | orchestrator |
2026-01-02 01:01:21.321306 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-01-02 01:01:21.321312 | orchestrator | Friday 02 January 2026 00:59:43 +0000 (0:00:00.573) 0:00:13.447 ********
2026-01-02 01:01:21.321318 | orchestrator | changed: [testbed-node-1]
2026-01-02 01:01:21.321325 | orchestrator | changed: [testbed-node-0]
2026-01-02 01:01:21.321331 | orchestrator | changed: [testbed-node-2]
2026-01-02 01:01:21.321337 | orchestrator |
2026-01-02 01:01:21.321343 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-01-02 01:01:21.321349 | orchestrator | Friday 02 January 2026 00:59:45 +0000 (0:00:01.721) 0:00:15.169 ********
2026-01-02 01:01:21.321360 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-02 01:01:21.321366 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-02 01:01:21.321373 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-02 01:01:21.321384 | orchestrator |
2026-01-02 01:01:21.321391 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-01-02 01:01:21.321397 | orchestrator | Friday 02 January 2026 00:59:47 +0000 (0:00:02.291) 0:00:17.460 ********
2026-01-02 01:01:21.321403 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-02 01:01:21.321410 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-02 01:01:21.321417 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-02 01:01:21.321423 | orchestrator |
2026-01-02 01:01:21.321429 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-01-02 01:01:21.321440 | orchestrator | Friday 02 January 2026 00:59:49 +0000 (0:00:02.541) 0:00:20.001 ********
2026-01-02 01:01:21.321447 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-02 01:01:21.321453 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-02 01:01:21.321459 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-02 01:01:21.321466 | orchestrator |
2026-01-02 01:01:21.321472 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-01-02 01:01:21.321478 | orchestrator | Friday 02 January 2026 00:59:52 +0000 (0:00:02.450) 0:00:22.451 ********
2026-01-02 01:01:21.321484 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:01:21.321491 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:01:21.321497 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:01:21.321503 | orchestrator |
2026-01-02 01:01:21.321510 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-01-02 01:01:21.321516 | orchestrator | Friday 02 January 2026 00:59:52 +0000 (0:00:00.341) 0:00:22.793 ********
2026-01-02 01:01:21.321522 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:01:21.321529 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:01:21.321535 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:01:21.321541 | orchestrator |
2026-01-02 01:01:21.321548 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-02 01:01:21.321554 | orchestrator | Friday 02 January 2026 00:59:52 +0000 (0:00:00.298) 0:00:23.091 ********
2026-01-02 01:01:21.321560 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 01:01:21.321567 | orchestrator |
2026-01-02 01:01:21.321573 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-01-02 01:01:21.321579 | orchestrator | Friday 02 January 2026 00:59:53 +0000 (0:00:00.787) 0:00:23.878 ********
2026-01-02 01:01:21.321598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}}) 2026-01-02 01:01:21.321617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 01:01:21.321629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 01:01:21.321641 | orchestrator | 2026-01-02 01:01:21.321648 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-02 01:01:21.321654 | orchestrator | Friday 02 January 2026 00:59:55 +0000 (0:00:01.661) 0:00:25.540 ******** 2026-01-02 01:01:21.321666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 01:01:21.321674 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:01:21.321852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 01:01:21.321870 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:01:21.321878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 01:01:21.321890 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:01:21.321896 | orchestrator | 2026-01-02 01:01:21.321902 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-01-02 01:01:21.321909 | orchestrator | Friday 02 January 2026 00:59:56 +0000 (0:00:00.669) 0:00:26.210 ******** 2026-01-02 01:01:21.321938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}})  2026-01-02 01:01:21.321946 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:01:21.321953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 01:01:21.321964 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:01:21.321979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 01:01:21.321986 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:01:21.321993 | orchestrator | 2026-01-02 01:01:21.321999 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-01-02 01:01:21.322006 | orchestrator | Friday 02 January 2026 00:59:56 +0000 (0:00:00.842) 0:00:27.052 ******** 2026-01-02 01:01:21.322051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 01:01:21.322076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 01:01:21.322087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 01:01:21.322098 | orchestrator | 2026-01-02 01:01:21.322105 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-02 01:01:21.322111 | orchestrator | Friday 02 January 2026 00:59:58 +0000 (0:00:01.630) 0:00:28.683 ******** 2026-01-02 01:01:21.322118 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:01:21.322124 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:01:21.322131 | orchestrator | 
skipping: [testbed-node-2]
2026-01-02 01:01:21.322142 | orchestrator |
2026-01-02 01:01:21.322152 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-02 01:01:21.322163 | orchestrator | Friday 02 January 2026 00:59:58 +0000 (0:00:00.288) 0:00:28.971 ********
2026-01-02 01:01:21.322173 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 01:01:21.322183 | orchestrator |
2026-01-02 01:01:21.322194 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-01-02 01:01:21.322210 | orchestrator | Friday 02 January 2026 00:59:59 +0000 (0:00:00.513) 0:00:29.485 ********
2026-01-02 01:01:21.322237 | orchestrator | changed: [testbed-node-0]
2026-01-02 01:01:21.322243 | orchestrator |
2026-01-02 01:01:21.322250 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-01-02 01:01:21.322256 | orchestrator | Friday 02 January 2026 01:00:02 +0000 (0:00:03.275) 0:00:32.760 ********
2026-01-02 01:01:21.322263 | orchestrator | changed: [testbed-node-0]
2026-01-02 01:01:21.322269 | orchestrator |
2026-01-02 01:01:21.322276 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-01-02 01:01:21.322282 | orchestrator | Friday 02 January 2026 01:00:06 +0000 (0:00:03.575) 0:00:36.335 ********
2026-01-02 01:01:21.322288 | orchestrator | changed: [testbed-node-0]
2026-01-02 01:01:21.322294 | orchestrator |
2026-01-02 01:01:21.322301 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-02 01:01:21.322307 | orchestrator | Friday 02 January 2026 01:00:23 +0000 (0:00:17.173) 0:00:53.509 ********
2026-01-02 01:01:21.322313 | orchestrator |
2026-01-02 01:01:21.322320 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-02 01:01:21.322326 | orchestrator | Friday 02 January 2026 01:00:23 +0000 (0:00:00.066) 0:00:53.576 ********
2026-01-02 01:01:21.322332 | orchestrator |
2026-01-02 01:01:21.322339 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-02 01:01:21.322345 | orchestrator | Friday 02 January 2026 01:00:23 +0000 (0:00:00.069) 0:00:53.711 ********
2026-01-02 01:01:21.322356 | orchestrator |
2026-01-02 01:01:21.322363 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-01-02 01:01:21.322369 | orchestrator | Friday 02 January 2026 01:00:23 +0000 (0:00:00.065) 0:00:53.641 ********
2026-01-02 01:01:21.322376 | orchestrator | changed: [testbed-node-0]
2026-01-02 01:01:21.322382 | orchestrator | changed: [testbed-node-1]
2026-01-02 01:01:21.322389 | orchestrator | changed: [testbed-node-2]
2026-01-02 01:01:21.322395 | orchestrator |
2026-01-02 01:01:21.322402 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 01:01:21.322408 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-02 01:01:21.322416 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-01-02 01:01:21.322423 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-01-02 01:01:21.322431 | orchestrator |
2026-01-02 01:01:21.322438 | orchestrator |
2026-01-02 01:01:21.322446 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 01:01:21.322455 | orchestrator | Friday 02 January 2026 01:01:20 +0000 (0:00:57.242) 0:01:50.953 ********
2026-01-02 01:01:21.322464 | orchestrator | ===============================================================================
2026-01-02 01:01:21.322472 | orchestrator | horizon : Restart horizon container ------------------------------------ 57.24s
2026-01-02 01:01:21.322481 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.17s
2026-01-02 01:01:21.322489 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 3.58s
2026-01-02 01:01:21.322498 | orchestrator | horizon : Creating Horizon database ------------------------------------- 3.28s
2026-01-02 01:01:21.322507 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.54s
2026-01-02 01:01:21.322515 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.45s
2026-01-02 01:01:21.322524 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.29s
2026-01-02 01:01:21.322533 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.72s
2026-01-02 01:01:21.322542 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.66s
2026-01-02 01:01:21.322551 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.63s
2026-01-02 01:01:21.322560 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.30s
2026-01-02 01:01:21.322568 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.84s
2026-01-02 01:01:21.322576 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.79s
2026-01-02 01:01:21.322585 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.76s
2026-01-02 01:01:21.322593 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.67s
2026-01-02 01:01:21.322609 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.57s
2026-01-02 01:01:21.322618 | orchestrator | horizon : Update policy file name
--------------------------------------- 0.57s 2026-01-02 01:01:21.322627 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s 2026-01-02 01:01:21.322636 | orchestrator | horizon : Update policy file name --------------------------------------- 0.55s 2026-01-02 01:01:21.322645 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.53s 2026-01-02 01:01:21.322653 | orchestrator | 2026-01-02 01:01:21 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:01:21.322662 | orchestrator | 2026-01-02 01:01:21 | INFO  | Task 02ca3a9b-313c-4803-a287-46a9cdbd32d2 is in state STARTED 2026-01-02 01:01:21.322728 | orchestrator | 2026-01-02 01:01:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:24.376629 | orchestrator | 2026-01-02 01:01:24 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:01:24.379206 | orchestrator | 2026-01-02 01:01:24 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:01:24.380507 | orchestrator | 2026-01-02 01:01:24 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:01:24.382566 | orchestrator | 2026-01-02 01:01:24 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:01:24.384670 | orchestrator | 2026-01-02 01:01:24 | INFO  | Task 02ca3a9b-313c-4803-a287-46a9cdbd32d2 is in state STARTED 2026-01-02 01:01:24.384806 | orchestrator | 2026-01-02 01:01:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:27.440315 | orchestrator | 2026-01-02 01:01:27 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:01:27.443125 | orchestrator | 2026-01-02 01:01:27 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:01:27.448512 | orchestrator | 2026-01-02 01:01:27 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state 
STARTED 2026-01-02 01:01:27.450644 | orchestrator | 2026-01-02 01:01:27 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:01:27.452412 | orchestrator | 2026-01-02 01:01:27 | INFO  | Task 02ca3a9b-313c-4803-a287-46a9cdbd32d2 is in state STARTED 2026-01-02 01:01:27.453784 | orchestrator | 2026-01-02 01:01:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:30.508175 | orchestrator | 2026-01-02 01:01:30 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:01:30.509998 | orchestrator | 2026-01-02 01:01:30 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:01:30.512459 | orchestrator | 2026-01-02 01:01:30 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:01:30.514064 | orchestrator | 2026-01-02 01:01:30 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:01:30.517310 | orchestrator | 2026-01-02 01:01:30 | INFO  | Task 02ca3a9b-313c-4803-a287-46a9cdbd32d2 is in state STARTED 2026-01-02 01:01:30.518158 | orchestrator | 2026-01-02 01:01:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:33.569702 | orchestrator | 2026-01-02 01:01:33 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:01:33.571323 | orchestrator | 2026-01-02 01:01:33 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:01:33.572797 | orchestrator | 2026-01-02 01:01:33 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:01:33.574145 | orchestrator | 2026-01-02 01:01:33 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:01:33.575298 | orchestrator | 2026-01-02 01:01:33 | INFO  | Task 02ca3a9b-313c-4803-a287-46a9cdbd32d2 is in state STARTED 2026-01-02 01:01:33.575363 | orchestrator | 2026-01-02 01:01:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 
01:01:36.620609 | orchestrator | 2026-01-02 01:01:36 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:01:36.621879 | orchestrator | 2026-01-02 01:01:36 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:01:36.625759 | orchestrator | 2026-01-02 01:01:36 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:01:36.627903 | orchestrator | 2026-01-02 01:01:36 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:01:36.629856 | orchestrator | 2026-01-02 01:01:36 | INFO  | Task 02ca3a9b-313c-4803-a287-46a9cdbd32d2 is in state SUCCESS 2026-01-02 01:01:36.630360 | orchestrator | 2026-01-02 01:01:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:39.686924 | orchestrator | 2026-01-02 01:01:39 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:01:39.687432 | orchestrator | 2026-01-02 01:01:39 | INFO  | Task 4b4064d5-e980-474e-a833-d514a39ea6b3 is in state STARTED 2026-01-02 01:01:39.689323 | orchestrator | 2026-01-02 01:01:39 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:01:39.690180 | orchestrator | 2026-01-02 01:01:39 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:01:39.691855 | orchestrator | 2026-01-02 01:01:39 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:01:39.691931 | orchestrator | 2026-01-02 01:01:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:42.737040 | orchestrator | 2026-01-02 01:01:42 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:01:42.737152 | orchestrator | 2026-01-02 01:01:42 | INFO  | Task 4b4064d5-e980-474e-a833-d514a39ea6b3 is in state STARTED 2026-01-02 01:01:42.739240 | orchestrator | 2026-01-02 01:01:42 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 
01:01:42.741992 | orchestrator | 2026-01-02 01:01:42 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:01:42.742841 | orchestrator | 2026-01-02 01:01:42 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state STARTED 2026-01-02 01:01:42.742897 | orchestrator | 2026-01-02 01:01:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:45.780533 | orchestrator | 2026-01-02 01:01:45 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state STARTED 2026-01-02 01:01:45.781927 | orchestrator | 2026-01-02 01:01:45 | INFO  | Task 4b4064d5-e980-474e-a833-d514a39ea6b3 is in state STARTED 2026-01-02 01:01:45.783842 | orchestrator | 2026-01-02 01:01:45 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:01:45.785509 | orchestrator | 2026-01-02 01:01:45 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:01:45.786888 | orchestrator | 2026-01-02 01:01:45 | INFO  | Task 1ad88a0f-20c5-448e-9c72-5ef0f43ccfd2 is in state SUCCESS 2026-01-02 01:01:45.787233 | orchestrator | 2026-01-02 01:01:45.787262 | orchestrator | 2026-01-02 01:01:45.787273 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-01-02 01:01:45.787284 | orchestrator | 2026-01-02 01:01:45.787295 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-01-02 01:01:45.787305 | orchestrator | Friday 02 January 2026 01:01:01 +0000 (0:00:00.167) 0:00:00.167 ******** 2026-01-02 01:01:45.787315 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-02 01:01:45.787326 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-02 01:01:45.787336 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-02 01:01:45.787362 | 
orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-02 01:01:45.787373 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-02 01:01:45.787383 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-02 01:01:45.787393 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-02 01:01:45.787429 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-02 01:01:45.787440 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-02 01:01:45.787450 | orchestrator | 2026-01-02 01:01:45.787460 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-01-02 01:01:45.787470 | orchestrator | Friday 02 January 2026 01:01:06 +0000 (0:00:04.815) 0:00:04.983 ******** 2026-01-02 01:01:45.787480 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-02 01:01:45.787490 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-02 01:01:45.787499 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-02 01:01:45.787509 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-02 01:01:45.787519 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-02 01:01:45.787529 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-02 01:01:45.787599 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 
=> (item=ceph.client.glance.keyring) 2026-01-02 01:01:45.787614 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-02 01:01:45.787624 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-02 01:01:45.787634 | orchestrator | 2026-01-02 01:01:45.787644 | orchestrator | TASK [Create share directory] ************************************************** 2026-01-02 01:01:45.787654 | orchestrator | Friday 02 January 2026 01:01:10 +0000 (0:00:04.324) 0:00:09.308 ******** 2026-01-02 01:01:45.787665 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-02 01:01:45.787675 | orchestrator | 2026-01-02 01:01:45.787685 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-01-02 01:01:45.787695 | orchestrator | Friday 02 January 2026 01:01:11 +0000 (0:00:01.019) 0:00:10.328 ******** 2026-01-02 01:01:45.787705 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-01-02 01:01:45.787715 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-02 01:01:45.787725 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-02 01:01:45.787735 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-01-02 01:01:45.787744 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-02 01:01:45.787754 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-01-02 01:01:45.787764 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-01-02 01:01:45.787774 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-01-02 01:01:45.787784 | orchestrator | changed: [testbed-manager 
-> localhost] => (item=ceph.client.manila.keyring) 2026-01-02 01:01:45.787794 | orchestrator | 2026-01-02 01:01:45.787806 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-01-02 01:01:45.787818 | orchestrator | Friday 02 January 2026 01:01:25 +0000 (0:00:13.940) 0:00:24.268 ******** 2026-01-02 01:01:45.787829 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-01-02 01:01:45.787841 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-01-02 01:01:45.787852 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-02 01:01:45.787872 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-02 01:01:45.787897 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-02 01:01:45.787910 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-02 01:01:45.787922 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-01-02 01:01:45.787934 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-01-02 01:01:45.787946 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-01-02 01:01:45.787958 | orchestrator | 2026-01-02 01:01:45.787969 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-01-02 01:01:45.787981 | orchestrator | Friday 02 January 2026 01:01:28 +0000 (0:00:03.229) 0:00:27.498 ******** 2026-01-02 01:01:45.787993 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 
2026-01-02 01:01:45.788005 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-02 01:01:45.788017 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-02 01:01:45.788029 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-01-02 01:01:45.788040 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-02 01:01:45.788052 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-01-02 01:01:45.788064 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-01-02 01:01:45.788075 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-01-02 01:01:45.788086 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-01-02 01:01:45.788098 | orchestrator | 2026-01-02 01:01:45.788110 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 01:01:45.788123 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:01:45.788136 | orchestrator | 2026-01-02 01:01:45.788149 | orchestrator | 2026-01-02 01:01:45.788161 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 01:01:45.788170 | orchestrator | Friday 02 January 2026 01:01:35 +0000 (0:00:07.199) 0:00:34.697 ******** 2026-01-02 01:01:45.788180 | orchestrator | =============================================================================== 2026-01-02 01:01:45.788222 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.94s 2026-01-02 01:01:45.788234 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.20s 2026-01-02 01:01:45.788244 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.82s 
2026-01-02 01:01:45.788260 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.32s 2026-01-02 01:01:45.788270 | orchestrator | Check if target directories exist --------------------------------------- 3.23s 2026-01-02 01:01:45.788280 | orchestrator | Create share directory -------------------------------------------------- 1.02s 2026-01-02 01:01:45.788290 | orchestrator | 2026-01-02 01:01:45.788416 | orchestrator | 2026-01-02 01:01:45.788431 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 01:01:45.788441 | orchestrator | 2026-01-02 01:01:45.788451 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 01:01:45.788460 | orchestrator | Friday 02 January 2026 01:00:38 +0000 (0:00:00.271) 0:00:00.271 ******** 2026-01-02 01:01:45.788470 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:01:45.788482 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:01:45.788492 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:01:45.788502 | orchestrator | 2026-01-02 01:01:45.788512 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 01:01:45.788530 | orchestrator | Friday 02 January 2026 01:00:38 +0000 (0:00:00.352) 0:00:00.624 ******** 2026-01-02 01:01:45.788540 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-01-02 01:01:45.788550 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-01-02 01:01:45.788559 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-01-02 01:01:45.788569 | orchestrator | 2026-01-02 01:01:45.788585 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-01-02 01:01:45.788602 | orchestrator | 2026-01-02 01:01:45.788618 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-02 
01:01:45.788635 | orchestrator | Friday 02 January 2026 01:00:39 +0000 (0:00:00.505) 0:00:01.130 ******** 2026-01-02 01:01:45.788651 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 01:01:45.788666 | orchestrator | 2026-01-02 01:01:45.788680 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-01-02 01:01:45.788693 | orchestrator | Friday 02 January 2026 01:00:39 +0000 (0:00:00.619) 0:00:01.749 ******** 2026-01-02 01:01:45.788707 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (5 retries left). 2026-01-02 01:01:45.788722 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (4 retries left). 2026-01-02 01:01:45.788735 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (3 retries left). 2026-01-02 01:01:45.788750 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (2 retries left). 2026-01-02 01:01:45.788763 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (1 retries left). 2026-01-02 01:01:45.788819 | orchestrator | failed: [testbed-node-0] (item=designate (dns)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Designate DNS Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9001"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9001"}], "name": "designate", "type": "dns"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767315704.032427-3241-198531275161715/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767315704.032427-3241-198531275161715/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File 
\"/tmp/ansible-tmp-1767315704.032427-3241-198531275161715/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_keystone_service_payload_g44zw2cs/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_g44zw2cs/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_g44zw2cs/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_g44zw2cs/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_g44zw2cs/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-02 01:01:45.788857 | orchestrator | 2026-01-02 01:01:45.788874 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 01:01:45.788899 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-02 01:01:45.788915 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:01:45.788932 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:01:45.788947 | orchestrator | 2026-01-02 01:01:45.788962 | orchestrator | 2026-01-02 01:01:45.788979 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 01:01:45.788997 | orchestrator | Friday 02 January 2026 01:01:45 +0000 (0:01:05.592) 0:01:07.341 ******** 2026-01-02 01:01:45.789015 | orchestrator | =============================================================================== 2026-01-02 01:01:45.789033 | orchestrator | service-ks-register : designate | Creating services -------------------- 65.59s 2026-01-02 01:01:45.789051 | orchestrator | designate : include_tasks ----------------------------------------------- 0.62s 2026-01-02 01:01:45.789068 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s 2026-01-02 01:01:45.789081 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2026-01-02 01:01:45.789093 | orchestrator | 2026-01-02 01:01:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:48.854892 | orchestrator | 2026-01-02 01:01:48 | INFO  | Task e23e40e0-2c82-46c9-93de-3748dbcc7c88 is in state SUCCESS 2026-01-02 01:01:48.855740 | orchestrator | 2026-01-02 01:01:48.855788 | orchestrator | 2026-01-02 
01:01:48.855804 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 01:01:48.855816 | orchestrator | 2026-01-02 01:01:48.855828 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 01:01:48.855840 | orchestrator | Friday 02 January 2026 01:00:38 +0000 (0:00:00.339) 0:00:00.339 ******** 2026-01-02 01:01:48.855851 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:01:48.855879 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:01:48.855891 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:01:48.855903 | orchestrator | 2026-01-02 01:01:48.855914 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 01:01:48.855926 | orchestrator | Friday 02 January 2026 01:00:38 +0000 (0:00:00.353) 0:00:00.692 ******** 2026-01-02 01:01:48.855937 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-01-02 01:01:48.855948 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-01-02 01:01:48.855959 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-01-02 01:01:48.855970 | orchestrator | 2026-01-02 01:01:48.855982 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-01-02 01:01:48.855993 | orchestrator | 2026-01-02 01:01:48.856004 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-02 01:01:48.856015 | orchestrator | Friday 02 January 2026 01:00:39 +0000 (0:00:00.447) 0:00:01.140 ******** 2026-01-02 01:01:48.856026 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 01:01:48.856037 | orchestrator | 2026-01-02 01:01:48.856048 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-01-02 01:01:48.856060 | orchestrator | Friday 02 
January 2026 01:00:39 +0000 (0:00:00.631) 0:00:01.771 ******** 2026-01-02 01:01:48.856070 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (5 retries left). 2026-01-02 01:01:48.856081 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (4 retries left). 2026-01-02 01:01:48.856092 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (3 retries left). 2026-01-02 01:01:48.856104 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (2 retries left). 2026-01-02 01:01:48.856141 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (1 retries left). 2026-01-02 01:01:48.856244 | orchestrator | failed: [testbed-node-0] (item=barbican (key-manager)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Barbican Key Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9311"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9311"}], "name": "barbican", "type": "key-manager"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767315704.1316113-3252-184625787625481/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767315704.1316113-3252-184625787625481/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File 
\"/tmp/ansible-tmp-1767315704.1316113-3252-184625787625481/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_keystone_service_payload_a_cbzp4b/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_a_cbzp4b/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_a_cbzp4b/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_a_cbzp4b/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_a_cbzp4b/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-02 01:01:48.856272 | orchestrator | 2026-01-02 01:01:48.856284 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 01:01:48.856304 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-02 01:01:48.856325 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:01:48.856347 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:01:48.856365 | orchestrator | 2026-01-02 01:01:48.856383 | orchestrator | 2026-01-02 01:01:48.856401 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 01:01:48.856421 | orchestrator | Friday 02 January 2026 01:01:45 +0000 (0:01:05.658) 0:01:07.430 ******** 2026-01-02 01:01:48.856440 | orchestrator | =============================================================================== 2026-01-02 01:01:48.856459 | orchestrator | service-ks-register : barbican | Creating services --------------------- 65.66s 2026-01-02 01:01:48.856476 | orchestrator | barbican : include_tasks ------------------------------------------------ 0.63s 2026-01-02 01:01:48.856487 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2026-01-02 01:01:48.856498 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2026-01-02 01:01:48.857576 | orchestrator | 2026-01-02 01:01:48 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:01:48.860568 | orchestrator | 2026-01-02 01:01:48 | INFO  | Task 4b4064d5-e980-474e-a833-d514a39ea6b3 is in state STARTED 2026-01-02 01:01:48.862961 | orchestrator | 2026-01-02 01:01:48 | INFO  | Task 
410a4e57-7f62-4810-b404-aa8d070afe21 is in state STARTED 2026-01-02 01:01:48.866805 | orchestrator | 2026-01-02 01:01:48 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:01:48.868305 | orchestrator | 2026-01-02 01:01:48 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:01:48.868522 | orchestrator | 2026-01-02 01:01:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:51.910802 | orchestrator | 2026-01-02 01:01:51 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:01:51.911729 | orchestrator | 2026-01-02 01:01:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:01:51.912592 | orchestrator | 2026-01-02 01:01:51 | INFO  | Task 4b4064d5-e980-474e-a833-d514a39ea6b3 is in state STARTED 2026-01-02 01:01:51.913872 | orchestrator | 2026-01-02 01:01:51.913905 | orchestrator | 2026-01-02 01:01:51.913914 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 01:01:51.913922 | orchestrator | 2026-01-02 01:01:51.913930 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 01:01:51.913938 | orchestrator | Friday 02 January 2026 01:00:38 +0000 (0:00:00.263) 0:00:00.263 ******** 2026-01-02 01:01:51.913945 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:01:51.913955 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:01:51.913962 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:01:51.913973 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:01:51.913985 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:01:51.913996 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:01:51.914008 | orchestrator | 2026-01-02 01:01:51.914075 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 01:01:51.914083 | orchestrator | Friday 02 January 2026 01:00:39 +0000 
(0:00:00.905) 0:00:01.168 ******** 2026-01-02 01:01:51.914105 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-01-02 01:01:51.914118 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-01-02 01:01:51.914130 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-01-02 01:01:51.914140 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-01-02 01:01:51.914159 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-01-02 01:01:51.914167 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-01-02 01:01:51.914173 | orchestrator | 2026-01-02 01:01:51.914208 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-01-02 01:01:51.914219 | orchestrator | 2026-01-02 01:01:51.914226 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-02 01:01:51.914243 | orchestrator | Friday 02 January 2026 01:00:39 +0000 (0:00:00.646) 0:00:01.815 ******** 2026-01-02 01:01:51.914251 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 01:01:51.914259 | orchestrator | 2026-01-02 01:01:51.914266 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-01-02 01:01:51.914273 | orchestrator | Friday 02 January 2026 01:00:40 +0000 (0:00:01.276) 0:00:03.091 ******** 2026-01-02 01:01:51.914280 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:01:51.914287 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:01:51.914294 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:01:51.914301 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:01:51.914308 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:01:51.914315 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:01:51.914322 | orchestrator | 2026-01-02 01:01:51.914329 
| orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-01-02 01:01:51.914352 | orchestrator | Friday 02 January 2026 01:00:42 +0000 (0:00:01.333) 0:00:04.424 ******** 2026-01-02 01:01:51.914360 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:01:51.914366 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:01:51.914373 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:01:51.914380 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:01:51.914387 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:01:51.914393 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:01:51.914400 | orchestrator | 2026-01-02 01:01:51.914407 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-01-02 01:01:51.914414 | orchestrator | Friday 02 January 2026 01:00:43 +0000 (0:00:01.161) 0:00:05.586 ******** 2026-01-02 01:01:51.914421 | orchestrator | ok: [testbed-node-0] => { 2026-01-02 01:01:51.914429 | orchestrator |  "changed": false, 2026-01-02 01:01:51.914436 | orchestrator |  "msg": "All assertions passed" 2026-01-02 01:01:51.914443 | orchestrator | } 2026-01-02 01:01:51.914450 | orchestrator | ok: [testbed-node-1] => { 2026-01-02 01:01:51.914459 | orchestrator |  "changed": false, 2026-01-02 01:01:51.914470 | orchestrator |  "msg": "All assertions passed" 2026-01-02 01:01:51.914481 | orchestrator | } 2026-01-02 01:01:51.914492 | orchestrator | ok: [testbed-node-2] => { 2026-01-02 01:01:51.914503 | orchestrator |  "changed": false, 2026-01-02 01:01:51.914514 | orchestrator |  "msg": "All assertions passed" 2026-01-02 01:01:51.914526 | orchestrator | } 2026-01-02 01:01:51.914537 | orchestrator | ok: [testbed-node-3] => { 2026-01-02 01:01:51.914549 | orchestrator |  "changed": false, 2026-01-02 01:01:51.914561 | orchestrator |  "msg": "All assertions passed" 2026-01-02 01:01:51.914573 | orchestrator | } 2026-01-02 01:01:51.914586 | orchestrator | ok: [testbed-node-4] => { 2026-01-02 
01:01:51.914596 | orchestrator |  "changed": false, 2026-01-02 01:01:51.914604 | orchestrator |  "msg": "All assertions passed" 2026-01-02 01:01:51.914612 | orchestrator | } 2026-01-02 01:01:51.914620 | orchestrator | ok: [testbed-node-5] => { 2026-01-02 01:01:51.914628 | orchestrator |  "changed": false, 2026-01-02 01:01:51.914636 | orchestrator |  "msg": "All assertions passed" 2026-01-02 01:01:51.914645 | orchestrator | } 2026-01-02 01:01:51.914653 | orchestrator | 2026-01-02 01:01:51.914661 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-01-02 01:01:51.914669 | orchestrator | Friday 02 January 2026 01:00:44 +0000 (0:00:00.774) 0:00:06.360 ******** 2026-01-02 01:01:51.914678 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:01:51.914686 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:01:51.914694 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:01:51.914701 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:01:51.914709 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:01:51.914717 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:01:51.914724 | orchestrator | 2026-01-02 01:01:51.914733 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-01-02 01:01:51.914741 | orchestrator | Friday 02 January 2026 01:00:44 +0000 (0:00:00.612) 0:00:06.973 ******** 2026-01-02 01:01:51.914749 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (5 retries left). 2026-01-02 01:01:51.914757 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (4 retries left). 2026-01-02 01:01:51.914765 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (3 retries left). 2026-01-02 01:01:51.914773 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (2 retries left). 
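The "FAILED - RETRYING" lines above come from Ansible's `retries`/`delay` handling on the `service-ks-register` task: the `os_keystone_service` module is re-run until it succeeds or the retry budget is exhausted, after which the host is marked failed. A minimal sketch of that pattern (the helper name and the transient-failure handling are illustrative, not taken from the actual playbook):

```python
import time

def run_with_retries(task, retries=5, delay=1.0):
    """Re-run `task` until it succeeds, printing Ansible-style
    'FAILED - RETRYING' messages; re-raise once retries run out."""
    for attempt in range(retries, 0, -1):
        try:
            return task()
        except Exception:
            if attempt == 1:
                raise  # budget exhausted -> task reported as failed
            print(f"FAILED - RETRYING: ({attempt - 1} retries left).")
            time.sleep(delay)
```

With `retries=5` this yields exactly the countdown seen in the log ("5 retries left" down to "1 retries left") before the final failure is surfaced as `attempts: 5` in the module result.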
2026-01-02 01:01:51.914781 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (1 retries left). 2026-01-02 01:01:51.914828 | orchestrator | failed: [testbed-node-0] (item=neutron (network)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Openstack Networking", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9696"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9696"}], "name": "neutron", "type": "network"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767315708.4824536-3296-220416493321687/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767315708.4824536-3296-220416493321687/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767315708.4824536-3296-220416493321687/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_keystone_service_payload_bieeyoxj/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_bieeyoxj/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_bieeyoxj/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_bieeyoxj/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File 
\"/tmp/ansible_os_keystone_service_payload_bieeyoxj/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise 
exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-02 01:01:51.914852 | orchestrator | 2026-01-02 01:01:51.914860 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 01:01:51.914867 | orchestrator | testbed-node-0 : ok=6  changed=0 unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-01-02 01:01:51.914874 | orchestrator | testbed-node-1 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 01:01:51.914881 | orchestrator | testbed-node-2 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 01:01:51.914888 | orchestrator | testbed-node-3 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 01:01:51.914895 | orchestrator | testbed-node-4 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 01:01:51.914901 | orchestrator | testbed-node-5 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 01:01:51.914908 | orchestrator | 2026-01-02 01:01:51.914915 | orchestrator | 2026-01-02 01:01:51.914922 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 01:01:51.914928 | orchestrator | Friday 02 January 2026 01:01:49 +0000 (0:01:04.945) 0:01:11.918 ******** 2026-01-02 01:01:51.914935 | orchestrator | =============================================================================== 2026-01-02 01:01:51.914942 | orchestrator | service-ks-register : neutron | Creating services ---------------------- 64.95s 2026-01-02 01:01:51.914949 | orchestrator | neutron : Get container facts ------------------------------------------- 1.33s 
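Each service run ends with a PLAY RECAP whose per-host counters (`ok=…`, `failed=…`) are the quickest way to see that only testbed-node-0, the node issuing the Keystone calls, actually failed. A small hypothetical parser for those counter lines, matching the format Ansible prints above:

```python
def parse_recap_line(line):
    """Split an Ansible PLAY RECAP line such as
    'testbed-node-0 : ok=6 changed=0 ... failed=1 ...'
    into (hostname, {counter_name: value})."""
    host, _, counters = line.partition(":")
    stats = {}
    for token in counters.split():
        name, _, value = token.partition("=")
        stats[name] = int(value)
    return host.strip(), stats
```

Feeding it the neutron recap above returns `failed=1` for testbed-node-0 and `failed=0` for the other five nodes, confirming the 503 is an API-endpoint problem rather than a per-node deployment problem.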
2026-01-02 01:01:51.914955 | orchestrator | neutron : include_tasks ------------------------------------------------- 1.28s 2026-01-02 01:01:51.914962 | orchestrator | neutron : Get container volume facts ------------------------------------ 1.16s 2026-01-02 01:01:51.914974 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.91s 2026-01-02 01:01:51.914980 | orchestrator | neutron : Check for ML2/OVN presence ------------------------------------ 0.77s 2026-01-02 01:01:51.914987 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s 2026-01-02 01:01:51.914998 | orchestrator | neutron : Check for ML2/OVS presence ------------------------------------ 0.61s 2026-01-02 01:01:51.915091 | orchestrator | 2026-01-02 01:01:51 | INFO  | Task 410a4e57-7f62-4810-b404-aa8d070afe21 is in state SUCCESS 2026-01-02 01:01:51.915102 | orchestrator | 2026-01-02 01:01:51 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:01:51.916205 | orchestrator | 2026-01-02 01:01:51 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:01:51.916429 | orchestrator | 2026-01-02 01:01:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:54.955008 | orchestrator | 2026-01-02 01:01:54 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:01:54.957680 | orchestrator | 2026-01-02 01:01:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:01:54.960253 | orchestrator | 2026-01-02 01:01:54 | INFO  | Task 4b4064d5-e980-474e-a833-d514a39ea6b3 is in state STARTED 2026-01-02 01:01:54.962266 | orchestrator | 2026-01-02 01:01:54 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:01:54.963571 | orchestrator | 2026-01-02 01:01:54 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:01:54.963734 | orchestrator | 2026-01-02 
01:01:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:58.002444 | orchestrator | 2026-01-02 01:01:58 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:01:58.003539 | orchestrator | 2026-01-02 01:01:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:01:58.006602 | orchestrator | 2026-01-02 01:01:58 | INFO  | Task 4b4064d5-e980-474e-a833-d514a39ea6b3 is in state STARTED 2026-01-02 01:01:58.009727 | orchestrator | 2026-01-02 01:01:58 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:01:58.012002 | orchestrator | 2026-01-02 01:01:58 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:01:58.012064 | orchestrator | 2026-01-02 01:01:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:01.060290 | orchestrator | 2026-01-02 01:02:01 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:01.061372 | orchestrator | 2026-01-02 01:02:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:01.062919 | orchestrator | 2026-01-02 01:02:01 | INFO  | Task 4b4064d5-e980-474e-a833-d514a39ea6b3 is in state STARTED 2026-01-02 01:02:01.064679 | orchestrator | 2026-01-02 01:02:01 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:02:01.065868 | orchestrator | 2026-01-02 01:02:01 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:02:01.065915 | orchestrator | 2026-01-02 01:02:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:04.113490 | orchestrator | 2026-01-02 01:02:04 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:04.116244 | orchestrator | 2026-01-02 01:02:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:04.117695 | orchestrator | 2026-01-02 01:02:04 | INFO  | Task 
4b4064d5-e980-474e-a833-d514a39ea6b3 is in state STARTED 2026-01-02 01:02:04.118915 | orchestrator | 2026-01-02 01:02:04 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:02:04.120671 | orchestrator | 2026-01-02 01:02:04 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:02:04.120729 | orchestrator | 2026-01-02 01:02:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:07.169641 | orchestrator | 2026-01-02 01:02:07 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:07.170559 | orchestrator | 2026-01-02 01:02:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:07.172434 | orchestrator | 2026-01-02 01:02:07 | INFO  | Task 4b4064d5-e980-474e-a833-d514a39ea6b3 is in state STARTED 2026-01-02 01:02:07.173791 | orchestrator | 2026-01-02 01:02:07 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:02:07.175643 | orchestrator | 2026-01-02 01:02:07 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:02:07.175654 | orchestrator | 2026-01-02 01:02:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:10.225433 | orchestrator | 2026-01-02 01:02:10 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:10.226618 | orchestrator | 2026-01-02 01:02:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:10.228630 | orchestrator | 2026-01-02 01:02:10 | INFO  | Task 4b4064d5-e980-474e-a833-d514a39ea6b3 is in state STARTED 2026-01-02 01:02:10.229756 | orchestrator | 2026-01-02 01:02:10 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:02:10.230901 | orchestrator | 2026-01-02 01:02:10 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:02:10.230931 | orchestrator | 2026-01-02 01:02:10 | INFO  | Wait 1 
second(s) until the next check 2026-01-02 01:02:13.280010 | orchestrator | 2026-01-02 01:02:13 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:13.280897 | orchestrator | 2026-01-02 01:02:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:13.282482 | orchestrator | 2026-01-02 01:02:13 | INFO  | Task 4b4064d5-e980-474e-a833-d514a39ea6b3 is in state STARTED 2026-01-02 01:02:13.283595 | orchestrator | 2026-01-02 01:02:13 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:02:13.284963 | orchestrator | 2026-01-02 01:02:13 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:02:13.284994 | orchestrator | 2026-01-02 01:02:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:16.327342 | orchestrator | 2026-01-02 01:02:16 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:16.329696 | orchestrator | 2026-01-02 01:02:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:16.332614 | orchestrator | 2026-01-02 01:02:16 | INFO  | Task 4b4064d5-e980-474e-a833-d514a39ea6b3 is in state STARTED 2026-01-02 01:02:16.334611 | orchestrator | 2026-01-02 01:02:16 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:02:16.336533 | orchestrator | 2026-01-02 01:02:16 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:02:16.336562 | orchestrator | 2026-01-02 01:02:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:19.384673 | orchestrator | 2026-01-02 01:02:19 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:19.386791 | orchestrator | 2026-01-02 01:02:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:19.389286 | orchestrator | 2026-01-02 01:02:19 | INFO  | Task 
4b4064d5-e980-474e-a833-d514a39ea6b3 is in state STARTED 2026-01-02 01:02:19.391164 | orchestrator | 2026-01-02 01:02:19 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:02:19.393111 | orchestrator | 2026-01-02 01:02:19 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:02:19.393156 | orchestrator | 2026-01-02 01:02:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:22.439587 | orchestrator | 2026-01-02 01:02:22 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:22.442534 | orchestrator | 2026-01-02 01:02:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:22.445431 | orchestrator | 2026-01-02 01:02:22 | INFO  | Task 4b4064d5-e980-474e-a833-d514a39ea6b3 is in state STARTED 2026-01-02 01:02:22.447992 | orchestrator | 2026-01-02 01:02:22 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:02:22.449920 | orchestrator | 2026-01-02 01:02:22 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:02:22.450270 | orchestrator | 2026-01-02 01:02:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:25.490808 | orchestrator | 2026-01-02 01:02:25 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:25.491974 | orchestrator | 2026-01-02 01:02:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:25.495180 | orchestrator | 2026-01-02 01:02:25 | INFO  | Task 4b4064d5-e980-474e-a833-d514a39ea6b3 is in state STARTED 2026-01-02 01:02:25.501363 | orchestrator | 2026-01-02 01:02:25 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:02:25.504787 | orchestrator | 2026-01-02 01:02:25 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:02:25.505614 | orchestrator | 2026-01-02 01:02:25 | INFO  | Wait 1 
second(s) until the next check 2026-01-02 01:02:28.550127 | orchestrator | 2026-01-02 01:02:28 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:28.551982 | orchestrator | 2026-01-02 01:02:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:28.554227 | orchestrator | 2026-01-02 01:02:28 | INFO  | Task 4b4064d5-e980-474e-a833-d514a39ea6b3 is in state STARTED 2026-01-02 01:02:28.556578 | orchestrator | 2026-01-02 01:02:28 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:02:28.558281 | orchestrator | 2026-01-02 01:02:28 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:02:28.558344 | orchestrator | 2026-01-02 01:02:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:31.605694 | orchestrator | 2026-01-02 01:02:31 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:31.607053 | orchestrator | 2026-01-02 01:02:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:31.608762 | orchestrator | 2026-01-02 01:02:31 | INFO  | Task 4b4064d5-e980-474e-a833-d514a39ea6b3 is in state STARTED 2026-01-02 01:02:31.612656 | orchestrator | 2026-01-02 01:02:31 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:02:31.615166 | orchestrator | 2026-01-02 01:02:31 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:02:31.615694 | orchestrator | 2026-01-02 01:02:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:34.663891 | orchestrator | 2026-01-02 01:02:34 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:34.665008 | orchestrator | 2026-01-02 01:02:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:34.668500 | orchestrator | 2026-01-02 01:02:34 | INFO  | Task 
4b4064d5-e980-474e-a833-d514a39ea6b3 is in state SUCCESS 2026-01-02 01:02:34.670510 | orchestrator | 2026-01-02 01:02:34 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:02:34.672785 | orchestrator | 2026-01-02 01:02:34 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:02:34.673371 | orchestrator | 2026-01-02 01:02:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:37.721363 | orchestrator | 2026-01-02 01:02:37 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:37.724149 | orchestrator | 2026-01-02 01:02:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:37.727099 | orchestrator | 2026-01-02 01:02:37 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:02:37.729043 | orchestrator | 2026-01-02 01:02:37 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:02:37.731593 | orchestrator | 2026-01-02 01:02:37 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:02:37.731647 | orchestrator | 2026-01-02 01:02:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:40.794134 | orchestrator | 2026-01-02 01:02:40 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:40.795966 | orchestrator | 2026-01-02 01:02:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:40.796960 | orchestrator | 2026-01-02 01:02:40 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:02:40.799540 | orchestrator | 2026-01-02 01:02:40 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:02:40.801487 | orchestrator | 2026-01-02 01:02:40 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state STARTED 2026-01-02 01:02:40.801513 | orchestrator | 2026-01-02 01:02:40 | INFO  | Wait 1 
second(s) until the next check 2026-01-02 01:02:43.841287 | orchestrator | 2026-01-02 01:02:43 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:43.843276 | orchestrator | 2026-01-02 01:02:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:43.845072 | orchestrator | 2026-01-02 01:02:43 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:02:43.848155 | orchestrator | 2026-01-02 01:02:43 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:02:43.850008 | orchestrator | 2026-01-02 01:02:43 | INFO  | Task 3dde8fb5-af82-4940-abad-81550de8fb6c is in state SUCCESS 2026-01-02 01:02:43.850559 | orchestrator | 2026-01-02 01:02:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:46.906863 | orchestrator | 2026-01-02 01:02:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:02:46.908482 | orchestrator | 2026-01-02 01:02:46 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:46.910619 | orchestrator | 2026-01-02 01:02:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:46.912462 | orchestrator | 2026-01-02 01:02:46 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:02:46.914723 | orchestrator | 2026-01-02 01:02:46 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:02:46.914773 | orchestrator | 2026-01-02 01:02:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:49.952137 | orchestrator | 2026-01-02 01:02:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:02:49.954216 | orchestrator | 2026-01-02 01:02:49 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:49.956178 | orchestrator | 2026-01-02 01:02:49 | INFO  | Task 
922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:49.958322 | orchestrator | 2026-01-02 01:02:49 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:02:49.960302 | orchestrator | 2026-01-02 01:02:49 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:02:49.960596 | orchestrator | 2026-01-02 01:02:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:53.007406 | orchestrator | 2026-01-02 01:02:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:02:53.008937 | orchestrator | 2026-01-02 01:02:53 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:53.010607 | orchestrator | 2026-01-02 01:02:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:53.012339 | orchestrator | 2026-01-02 01:02:53 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:02:53.013663 | orchestrator | 2026-01-02 01:02:53 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:02:53.013701 | orchestrator | 2026-01-02 01:02:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:56.061437 | orchestrator | 2026-01-02 01:02:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:02:56.062946 | orchestrator | 2026-01-02 01:02:56 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:56.067721 | orchestrator | 2026-01-02 01:02:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:56.071291 | orchestrator | 2026-01-02 01:02:56 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:02:56.074095 | orchestrator | 2026-01-02 01:02:56 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state STARTED 2026-01-02 01:02:56.074144 | orchestrator | 2026-01-02 01:02:56 | INFO  | Wait 1 
second(s) until the next check 2026-01-02 01:02:59.113384 | orchestrator | 2026-01-02 01:02:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:02:59.114476 | orchestrator | 2026-01-02 01:02:59 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state STARTED 2026-01-02 01:02:59.115615 | orchestrator | 2026-01-02 01:02:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:02:59.118906 | orchestrator | 2026-01-02 01:02:59 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:02:59.120213 | orchestrator | 2026-01-02 01:02:59 | INFO  | Task 3f9d421b-96b9-4fec-9b4c-aff1bde8ce98 is in state SUCCESS 2026-01-02 01:02:59.120959 | orchestrator | 2026-01-02 01:02:59.120987 | orchestrator | 2026-01-02 01:02:59.120996 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-01-02 01:02:59.121007 | orchestrator | 2026-01-02 01:02:59.121016 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-01-02 01:02:59.121026 | orchestrator | Friday 02 January 2026 01:01:40 +0000 (0:00:00.237) 0:00:00.237 ******** 2026-01-02 01:02:59.121035 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-01-02 01:02:59.121068 | orchestrator | 2026-01-02 01:02:59.121078 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-01-02 01:02:59.121087 | orchestrator | Friday 02 January 2026 01:01:40 +0000 (0:00:00.240) 0:00:00.478 ******** 2026-01-02 01:02:59.121097 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-01-02 01:02:59.121106 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-01-02 01:02:59.121116 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-01-02 
01:02:59.121125 | orchestrator | 2026-01-02 01:02:59.121133 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-01-02 01:02:59.121140 | orchestrator | Friday 02 January 2026 01:01:42 +0000 (0:00:01.286) 0:00:01.765 ******** 2026-01-02 01:02:59.121182 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-01-02 01:02:59.121193 | orchestrator | 2026-01-02 01:02:59.121202 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-01-02 01:02:59.121210 | orchestrator | Friday 02 January 2026 01:01:43 +0000 (0:00:01.503) 0:00:03.269 ******** 2026-01-02 01:02:59.121219 | orchestrator | changed: [testbed-manager] 2026-01-02 01:02:59.121230 | orchestrator | 2026-01-02 01:02:59.121295 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-01-02 01:02:59.121315 | orchestrator | Friday 02 January 2026 01:01:44 +0000 (0:00:00.946) 0:00:04.215 ******** 2026-01-02 01:02:59.121325 | orchestrator | changed: [testbed-manager] 2026-01-02 01:02:59.121333 | orchestrator | 2026-01-02 01:02:59.121342 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-01-02 01:02:59.121379 | orchestrator | Friday 02 January 2026 01:01:45 +0000 (0:00:00.982) 0:00:05.198 ******** 2026-01-02 01:02:59.121389 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2026-01-02 01:02:59.121398 | orchestrator | ok: [testbed-manager] 2026-01-02 01:02:59.121408 | orchestrator | 2026-01-02 01:02:59.121416 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-01-02 01:02:59.121425 | orchestrator | Friday 02 January 2026 01:02:23 +0000 (0:00:38.359) 0:00:43.557 ******** 2026-01-02 01:02:59.121434 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-01-02 01:02:59.121443 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-01-02 01:02:59.121452 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-01-02 01:02:59.121461 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-01-02 01:02:59.121469 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-01-02 01:02:59.121477 | orchestrator | 2026-01-02 01:02:59.121486 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-01-02 01:02:59.121495 | orchestrator | Friday 02 January 2026 01:02:28 +0000 (0:00:04.278) 0:00:47.836 ******** 2026-01-02 01:02:59.121503 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-01-02 01:02:59.121512 | orchestrator | 2026-01-02 01:02:59.121520 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-01-02 01:02:59.121529 | orchestrator | Friday 02 January 2026 01:02:28 +0000 (0:00:00.462) 0:00:48.299 ******** 2026-01-02 01:02:59.121537 | orchestrator | skipping: [testbed-manager] 2026-01-02 01:02:59.121546 | orchestrator | 2026-01-02 01:02:59.121554 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-01-02 01:02:59.121563 | orchestrator | Friday 02 January 2026 01:02:28 +0000 (0:00:00.135) 0:00:48.434 ******** 2026-01-02 01:02:59.121571 | orchestrator | skipping: [testbed-manager] 2026-01-02 01:02:59.121580 | orchestrator | 2026-01-02 01:02:59.121591 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-01-02 01:02:59.121600 | orchestrator | Friday 02 January 2026 01:02:29 +0000 (0:00:00.511) 0:00:48.946 ******** 2026-01-02 01:02:59.121609 | orchestrator | changed: [testbed-manager] 2026-01-02 01:02:59.121628 | orchestrator | 2026-01-02 01:02:59.121638 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-01-02 01:02:59.121647 | orchestrator | Friday 02 January 2026 01:02:30 +0000 (0:00:01.428) 0:00:50.374 ******** 2026-01-02 01:02:59.121657 | orchestrator | changed: [testbed-manager] 2026-01-02 01:02:59.121666 | orchestrator | 2026-01-02 01:02:59.121676 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-01-02 01:02:59.121685 | orchestrator | Friday 02 January 2026 01:02:31 +0000 (0:00:00.837) 0:00:51.211 ******** 2026-01-02 01:02:59.121694 | orchestrator | changed: [testbed-manager] 2026-01-02 01:02:59.121703 | orchestrator | 2026-01-02 01:02:59.121712 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-01-02 01:02:59.121722 | orchestrator | Friday 02 January 2026 01:02:32 +0000 (0:00:00.647) 0:00:51.859 ******** 2026-01-02 01:02:59.121731 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-01-02 01:02:59.121741 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-01-02 01:02:59.121750 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-01-02 01:02:59.121760 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-01-02 01:02:59.121768 | orchestrator | 2026-01-02 01:02:59.121778 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 01:02:59.121788 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 01:02:59.121799 | orchestrator | 2026-01-02 01:02:59.121809 | orchestrator | 2026-01-02 
01:02:59.121830 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 01:02:59.121839 | orchestrator | Friday 02 January 2026 01:02:33 +0000 (0:00:01.567) 0:00:53.426 ******** 2026-01-02 01:02:59.121849 | orchestrator | =============================================================================== 2026-01-02 01:02:59.121858 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 38.36s 2026-01-02 01:02:59.121867 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.28s 2026-01-02 01:02:59.121876 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.57s 2026-01-02 01:02:59.121885 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.50s 2026-01-02 01:02:59.121894 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.43s 2026-01-02 01:02:59.121903 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.29s 2026-01-02 01:02:59.121912 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.98s 2026-01-02 01:02:59.121921 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.95s 2026-01-02 01:02:59.121930 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.84s 2026-01-02 01:02:59.121939 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.65s 2026-01-02 01:02:59.121948 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.51s 2026-01-02 01:02:59.121957 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.46s 2026-01-02 01:02:59.121965 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s 2026-01-02 01:02:59.121972 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2026-01-02 01:02:59.121980 | orchestrator | 2026-01-02 01:02:59.121987 | orchestrator | 2026-01-02 01:02:59.122001 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-01-02 01:02:59.122008 | orchestrator | 2026-01-02 01:02:59.122093 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-01-02 01:02:59.122103 | orchestrator | Friday 02 January 2026 01:01:25 +0000 (0:00:00.112) 0:00:00.112 ******** 2026-01-02 01:02:59.122111 | orchestrator | changed: [localhost] 2026-01-02 01:02:59.122120 | orchestrator | 2026-01-02 01:02:59.122128 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-01-02 01:02:59.122136 | orchestrator | Friday 02 January 2026 01:01:26 +0000 (0:00:00.913) 0:00:01.025 ******** 2026-01-02 01:02:59.122155 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 2026-01-02 01:02:59.122163 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left). 2026-01-02 01:02:59.122171 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (1 retries left). 2026-01-02 01:02:59.122182 | orchestrator | fatal: [localhost]: FAILED! 
=> {"attempts": 3, "changed": false, "dest": "/share/ironic/ironic/ironic-agent.initramfs", "elapsed": 10, "msg": "Request failed: ", "url": "https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/ipa-centos9-stable-2024.2.initramfs.sha256"} 2026-01-02 01:02:59.122193 | orchestrator | 2026-01-02 01:02:59.122201 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 01:02:59.122209 | orchestrator | localhost : ok=1  changed=1  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-02 01:02:59.122218 | orchestrator | 2026-01-02 01:02:59.122226 | orchestrator | 2026-01-02 01:02:59.122234 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 01:02:59.122259 | orchestrator | Friday 02 January 2026 01:02:43 +0000 (0:01:16.935) 0:01:17.961 ******** 2026-01-02 01:02:59.122268 | orchestrator | =============================================================================== 2026-01-02 01:02:59.122276 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 76.94s 2026-01-02 01:02:59.122290 | orchestrator | Ensure the destination directory exists --------------------------------- 0.91s 2026-01-02 01:02:59.122299 | orchestrator | 2026-01-02 01:02:59.122605 | orchestrator | 2026-01-02 01:02:59.122715 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 01:02:59.122743 | orchestrator | 2026-01-02 01:02:59.122763 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 01:02:59.122781 | orchestrator | Friday 02 January 2026 01:01:51 +0000 (0:00:00.377) 0:00:00.377 ******** 2026-01-02 01:02:59.122800 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:02:59.122821 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:02:59.122840 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:02:59.122857 | orchestrator | 
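The "FAILED - RETRYING … (N retries left)" lines above (for the ironic-agent initramfs download, and again below for the placement service registration) come from a bounded retry loop: attempt the operation, log the remaining attempts on failure, sleep, and surface the last error once attempts are exhausted. A minimal sketch of that pattern, with a hypothetical helper name and an injected `fetch` callable standing in for the actual download:

```python
import time


def fetch_with_retries(fetch, attempts=3, delay=1):
    """Call fetch() until it succeeds or attempts are exhausted.

    Mirrors the behaviour visible in the log: each failure is reported
    with the number of retries left, and after the final attempt the
    last error is raised to the caller (the task then goes 'fatal').
    """
    last_error = None
    for remaining in range(attempts, 0, -1):
        try:
            return fetch()
        except Exception as exc:  # sketch only; narrow this in real code
            last_error = exc
            if remaining > 1:
                print(f"FAILED - RETRYING ({remaining - 1} retries left).")
                time.sleep(delay)
    raise last_error
```

The helper name and signature are illustrative, not the osism or Ansible implementation; in the playbooks themselves this behaviour is configured declaratively via task-level `retries`/`delay`/`until` keywords.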
2026-01-02 01:02:59.122875 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 01:02:59.122894 | orchestrator | Friday 02 January 2026 01:01:51 +0000 (0:00:00.448) 0:00:00.826 ******** 2026-01-02 01:02:59.122912 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-01-02 01:02:59.122931 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-01-02 01:02:59.122949 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-01-02 01:02:59.122968 | orchestrator | 2026-01-02 01:02:59.122986 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-01-02 01:02:59.123006 | orchestrator | 2026-01-02 01:02:59.123023 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-02 01:02:59.123041 | orchestrator | Friday 02 January 2026 01:01:52 +0000 (0:00:00.764) 0:00:01.591 ******** 2026-01-02 01:02:59.123059 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 01:02:59.123077 | orchestrator | 2026-01-02 01:02:59.123095 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-01-02 01:02:59.123112 | orchestrator | Friday 02 January 2026 01:01:53 +0000 (0:00:00.645) 0:00:02.236 ******** 2026-01-02 01:02:59.123131 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (5 retries left). 2026-01-02 01:02:59.123151 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (4 retries left). 2026-01-02 01:02:59.123171 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (3 retries left). 2026-01-02 01:02:59.123195 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (2 retries left). 
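Earlier in this log the osism client repeatedly prints "Task <uuid> is in state STARTED" followed by "Wait 1 second(s) until the next check" until each task reaches SUCCESS. A minimal sketch of such a wait loop, assuming a `get_state(task_id) -> str` accessor (hypothetical; the real client's API may differ):

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1, max_checks=None):
    """Poll task states until every task leaves the STARTED state.

    get_state(task_id) is assumed to return a state string such as
    "STARTED" or "SUCCESS". Completed tasks drop out of the polling
    set; the remainder are re-checked after a fixed sleep.
    """
    pending = set(task_ids)
    checks = 0
    while pending:
        # Iterate over a snapshot so we can discard finished tasks.
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if not pending:
            break
        checks += 1
        if max_checks is not None and checks >= max_checks:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        print(f"Wait {interval} second(s) until the next check")
        time.sleep(interval)
```

Note the loop only distinguishes STARTED from everything else; the log shows tasks transitioning to SUCCESS, but a production version would also need to treat failure states explicitly rather than counting them as done.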
2026-01-02 01:02:59.123274 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (1 retries left). 2026-01-02 01:02:59.123357 | orchestrator | failed: [testbed-node-0] (item=placement (placement)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Placement Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:8780"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:8780"}], "name": "placement", "type": "placement"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767315777.1575663-3703-92039905226697/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767315777.1575663-3703-92039905226697/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767315777.1575663-3703-92039905226697/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_keystone_service_payload_4e85ga0b/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_4e85ga0b/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_4e85ga0b/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_4e85ga0b/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File 
\"/tmp/ansible_os_keystone_service_payload_4e85ga0b/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise 
exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-02 01:02:59.123400 | orchestrator | 2026-01-02 01:02:59.123424 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 01:02:59.123454 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-02 01:02:59.123476 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:02:59.123497 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:02:59.123516 | orchestrator | 2026-01-02 01:02:59.123536 | orchestrator | 2026-01-02 01:02:59.123556 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 01:02:59.123576 | orchestrator | Friday 02 January 2026 01:02:58 +0000 (0:01:05.477) 0:01:07.714 ******** 2026-01-02 01:02:59.123595 | orchestrator | =============================================================================== 2026-01-02 01:02:59.123614 | orchestrator | service-ks-register : placement | Creating services -------------------- 65.48s 2026-01-02 01:02:59.123632 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.76s 2026-01-02 01:02:59.123653 | orchestrator | placement : include_tasks ----------------------------------------------- 0.65s 2026-01-02 01:02:59.123673 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.45s 2026-01-02 01:02:59.123690 | orchestrator | 2026-01-02 01:02:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:02.170467 | orchestrator | 
2026-01-02 01:03:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:03:02.172737 | orchestrator | 2026-01-02 01:03:02 | INFO  | Task a10c967a-3468-48f0-a4eb-9cf3abc28f08 is in state SUCCESS 2026-01-02 01:03:02.173894 | orchestrator | 2026-01-02 01:03:02.173914 | orchestrator | 2026-01-02 01:03:02.173924 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 01:03:02.173933 | orchestrator | 2026-01-02 01:03:02.173941 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 01:03:02.173949 | orchestrator | Friday 02 January 2026 01:01:51 +0000 (0:00:00.326) 0:00:00.326 ******** 2026-01-02 01:03:02.173957 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:03:02.173969 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:03:02.173977 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:03:02.173986 | orchestrator | 2026-01-02 01:03:02.173992 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 01:03:02.173997 | orchestrator | Friday 02 January 2026 01:01:51 +0000 (0:00:00.391) 0:00:00.717 ******** 2026-01-02 01:03:02.174003 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-01-02 01:03:02.174008 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-01-02 01:03:02.174014 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-01-02 01:03:02.174048 | orchestrator | 2026-01-02 01:03:02.174053 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-01-02 01:03:02.174058 | orchestrator | 2026-01-02 01:03:02.174063 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-02 01:03:02.174068 | orchestrator | Friday 02 January 2026 01:01:52 +0000 (0:00:00.622) 0:00:01.340 ******** 2026-01-02 01:03:02.174073 | orchestrator | included: 
/ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 01:03:02.174079 | orchestrator | 2026-01-02 01:03:02.174097 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-01-02 01:03:02.174103 | orchestrator | Friday 02 January 2026 01:01:52 +0000 (0:00:00.592) 0:00:01.932 ******** 2026-01-02 01:03:02.174107 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (5 retries left). 2026-01-02 01:03:02.174113 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (4 retries left). 2026-01-02 01:03:02.174118 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (3 retries left). 2026-01-02 01:03:02.174123 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (2 retries left). 2026-01-02 01:03:02.174127 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (1 retries left). 2026-01-02 01:03:02.174153 | orchestrator | failed: [testbed-node-0] (item=magnum (container-infra)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Container Infrastructure Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9511/v1"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9511/v1"}], "name": "magnum", "type": "container-infra"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767315777.445822-3721-195598499539856/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767315777.445822-3721-195598499539856/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File 
\"/tmp/ansible-tmp-1767315777.445822-3721-195598499539856/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_keystone_service_payload_ujc8e42p/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_ujc8e42p/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_ujc8e42p/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_ujc8e42p/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_ujc8e42p/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-02 01:03:02.174181 | orchestrator | 2026-01-02 01:03:02.174186 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 01:03:02.174195 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-02 01:03:02.174201 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:03:02.174207 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:03:02.174212 | orchestrator | 2026-01-02 01:03:02.174217 | orchestrator | 2026-01-02 01:03:02.174222 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 01:03:02.174227 | orchestrator | Friday 02 January 2026 01:02:58 +0000 (0:01:05.843) 0:01:07.775 ******** 2026-01-02 01:03:02.174232 | orchestrator | =============================================================================== 2026-01-02 01:03:02.174237 | orchestrator | service-ks-register : magnum | Creating services ----------------------- 65.84s 2026-01-02 01:03:02.174242 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s 2026-01-02 01:03:02.174269 | orchestrator | magnum : include_tasks -------------------------------------------------- 0.59s 2026-01-02 01:03:02.174274 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s 2026-01-02 01:03:02.175835 | orchestrator | 2026-01-02 01:03:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:03:02.178589 | orchestrator | 2026-01-02 01:03:02 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:03:02.181075 | orchestrator | 2026-01-02 01:03:02 | INFO  | Task 
2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:03:02.181314 | orchestrator | 2026-01-02 01:03:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:05.229654 | orchestrator | 2026-01-02 01:03:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:03:05.231514 | orchestrator | 2026-01-02 01:03:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:03:05.238647 | orchestrator | 2026-01-02 01:03:05 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:03:05.239835 | orchestrator | 2026-01-02 01:03:05 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:03:05.240073 | orchestrator | 2026-01-02 01:03:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:08.291097 | orchestrator | 2026-01-02 01:03:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:03:08.294164 | orchestrator | 2026-01-02 01:03:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:03:08.295575 | orchestrator | 2026-01-02 01:03:08 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:03:08.297743 | orchestrator | 2026-01-02 01:03:08 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:03:08.298402 | orchestrator | 2026-01-02 01:03:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:11.346980 | orchestrator | 2026-01-02 01:03:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:03:11.350280 | orchestrator | 2026-01-02 01:03:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:03:11.353256 | orchestrator | 2026-01-02 01:03:11 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:03:11.356153 | orchestrator | 2026-01-02 01:03:11 | INFO  | Task 
2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:03:11.356209 | orchestrator | 2026-01-02 01:03:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:14.414131 | orchestrator | 2026-01-02 01:03:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:03:14.415678 | orchestrator | 2026-01-02 01:03:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:03:14.417067 | orchestrator | 2026-01-02 01:03:14 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:03:14.419201 | orchestrator | 2026-01-02 01:03:14 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:03:14.419249 | orchestrator | 2026-01-02 01:03:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:17.469131 | orchestrator | 2026-01-02 01:03:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:03:17.470719 | orchestrator | 2026-01-02 01:03:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:03:17.472192 | orchestrator | 2026-01-02 01:03:17 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:03:17.473720 | orchestrator | 2026-01-02 01:03:17 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:03:17.473752 | orchestrator | 2026-01-02 01:03:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:20.514354 | orchestrator | 2026-01-02 01:03:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:03:20.519722 | orchestrator | 2026-01-02 01:03:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:03:20.523250 | orchestrator | 2026-01-02 01:03:20 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:03:20.525461 | orchestrator | 2026-01-02 01:03:20 | INFO  | Task 
2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:03:20.525681 | orchestrator | 2026-01-02 01:03:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:23.570915 | orchestrator | 2026-01-02 01:03:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:03:23.571089 | orchestrator | 2026-01-02 01:03:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:03:23.575799 | orchestrator | 2026-01-02 01:03:23 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:03:23.575877 | orchestrator | 2026-01-02 01:03:23 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:03:23.575892 | orchestrator | 2026-01-02 01:03:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:26.616129 | orchestrator | 2026-01-02 01:03:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:03:26.618223 | orchestrator | 2026-01-02 01:03:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:03:26.621404 | orchestrator | 2026-01-02 01:03:26 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:03:26.622686 | orchestrator | 2026-01-02 01:03:26 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:03:26.622765 | orchestrator | 2026-01-02 01:03:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:29.665735 | orchestrator | 2026-01-02 01:03:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:03:29.665822 | orchestrator | 2026-01-02 01:03:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:03:29.666406 | orchestrator | 2026-01-02 01:03:29 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:03:29.668927 | orchestrator | 2026-01-02 01:03:29 | INFO  | Task 
2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:03:29.669011 | orchestrator | 2026-01-02 01:03:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:32.702712 | orchestrator | 2026-01-02 01:03:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:03:32.702828 | orchestrator | 2026-01-02 01:03:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:03:32.703482 | orchestrator | 2026-01-02 01:03:32 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:03:32.704162 | orchestrator | 2026-01-02 01:03:32 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:03:32.704188 | orchestrator | 2026-01-02 01:03:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:35.734072 | orchestrator | 2026-01-02 01:03:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:03:35.734257 | orchestrator | 2026-01-02 01:03:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:03:35.735841 | orchestrator | 2026-01-02 01:03:35 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:03:35.737011 | orchestrator | 2026-01-02 01:03:35 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:03:35.737032 | orchestrator | 2026-01-02 01:03:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:38.842248 | orchestrator | 2026-01-02 01:03:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:03:38.842393 | orchestrator | 2026-01-02 01:03:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:03:38.842954 | orchestrator | 2026-01-02 01:03:38 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:03:38.843408 | orchestrator | 2026-01-02 01:03:38 | INFO  | Task 
2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:03:38.843439 | orchestrator | 2026-01-02 01:03:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:41.873423 | orchestrator | 2026-01-02 01:03:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:03:41.874205 | orchestrator | 2026-01-02 01:03:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:03:41.875158 | orchestrator | 2026-01-02 01:03:41 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:03:41.876842 | orchestrator | 2026-01-02 01:03:41 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:03:41.876940 | orchestrator | 2026-01-02 01:03:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:44.923217 | orchestrator | 2026-01-02 01:03:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:03:44.926669 | orchestrator | 2026-01-02 01:03:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:03:44.928382 | orchestrator | 2026-01-02 01:03:44 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:03:44.933298 | orchestrator | 2026-01-02 01:03:44 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:03:44.933667 | orchestrator | 2026-01-02 01:03:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:47.989755 | orchestrator | 2026-01-02 01:03:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:03:47.990684 | orchestrator | 2026-01-02 01:03:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:03:47.992194 | orchestrator | 2026-01-02 01:03:47 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:03:47.993763 | orchestrator | 2026-01-02 01:03:47 | INFO  | Task 
2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:03:47.993781 | orchestrator | 2026-01-02 01:03:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:51.045067 | orchestrator | 2026-01-02 01:03:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:03:51.046422 | orchestrator | 2026-01-02 01:03:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:03:51.048683 | orchestrator | 2026-01-02 01:03:51 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:03:51.050734 | orchestrator | 2026-01-02 01:03:51 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:03:51.050818 | orchestrator | 2026-01-02 01:03:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:54.100621 | orchestrator | 2026-01-02 01:03:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:03:54.101497 | orchestrator | 2026-01-02 01:03:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:03:54.103061 | orchestrator | 2026-01-02 01:03:54 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:03:54.104149 | orchestrator | 2026-01-02 01:03:54 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:03:54.104191 | orchestrator | 2026-01-02 01:03:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:57.151414 | orchestrator | 2026-01-02 01:03:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:03:57.152568 | orchestrator | 2026-01-02 01:03:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:03:57.154189 | orchestrator | 2026-01-02 01:03:57 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:03:57.155723 | orchestrator | 2026-01-02 01:03:57 | INFO  | Task 
2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:03:57.155754 | orchestrator | 2026-01-02 01:03:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:04:00.209986 | orchestrator | 2026-01-02 01:04:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:04:00.213741 | orchestrator | 2026-01-02 01:04:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:04:00.216710 | orchestrator | 2026-01-02 01:04:00 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:04:00.219013 | orchestrator | 2026-01-02 01:04:00 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:04:00.219192 | orchestrator | 2026-01-02 01:04:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:04:03.274290 | orchestrator | 2026-01-02 01:04:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:04:03.275909 | orchestrator | 2026-01-02 01:04:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:04:03.277979 | orchestrator | 2026-01-02 01:04:03 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:04:03.279820 | orchestrator | 2026-01-02 01:04:03 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:04:03.280043 | orchestrator | 2026-01-02 01:04:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:04:06.325462 | orchestrator | 2026-01-02 01:04:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:04:06.326783 | orchestrator | 2026-01-02 01:04:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:04:06.328799 | orchestrator | 2026-01-02 01:04:06 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state STARTED 2026-01-02 01:04:06.330186 | orchestrator | 2026-01-02 01:04:06 | INFO  | Task 
2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:04:06.330820 | orchestrator | 2026-01-02 01:04:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:04:09.389445 | orchestrator | 2026-01-02 01:04:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:04:09.391763 | orchestrator | 2026-01-02 01:04:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:04:09.395508 | orchestrator | 2026-01-02 01:04:09 | INFO  | Task 9016da11-0594-460a-b080-ec7c3b75bc66 is in state SUCCESS 2026-01-02 01:04:09.399933 | orchestrator | 2026-01-02 01:04:09 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:04:09.401139 | orchestrator | 2026-01-02 01:04:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:04:12.448054 | orchestrator | 2026-01-02 01:04:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:04:12.450214 | orchestrator | 2026-01-02 01:04:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:04:12.452067 | orchestrator | 2026-01-02 01:04:12 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:04:12.452120 | orchestrator | 2026-01-02 01:04:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:04:15.492853 | orchestrator | 2026-01-02 01:04:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:04:15.492991 | orchestrator | 2026-01-02 01:04:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:04:15.493015 | orchestrator | 2026-01-02 01:04:15 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:04:15.493028 | orchestrator | 2026-01-02 01:04:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:04:18.540754 | orchestrator | 2026-01-02 01:04:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 01:04:18.540854 | orchestrator | 2026-01-02 01:04:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:04:18.541222 | orchestrator | 2026-01-02 01:04:18 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:04:18.541296 | orchestrator | 2026-01-02 01:04:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:04:21.576126 | orchestrator | 2026-01-02 01:04:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:04:21.580623 | orchestrator | 2026-01-02 01:04:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:04:21.581883 | orchestrator | 2026-01-02 01:04:21 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:04:21.582223 | orchestrator | 2026-01-02 01:04:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:04:24.622215 | orchestrator | 2026-01-02 01:04:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:04:24.623080 | orchestrator | 2026-01-02 01:04:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:04:24.624232 | orchestrator | 2026-01-02 01:04:24 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:04:24.624251 | orchestrator | 2026-01-02 01:04:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:04:27.664094 | orchestrator | 2026-01-02 01:04:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:04:27.666007 | orchestrator | 2026-01-02 01:04:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:04:27.668756 | orchestrator | 2026-01-02 01:04:27 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:04:27.668816 | orchestrator | 2026-01-02 01:04:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:04:30.713749 | orchestrator | 
2026-01-02 01:04:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:04:30.716077 | orchestrator | 2026-01-02 01:04:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:04:30.719242 | orchestrator | 2026-01-02 01:04:30 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:04:30.719751 | orchestrator | 2026-01-02 01:04:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:06:02.256756 | orchestrator | 2026-01-02 01:06:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:06:02.258938 | orchestrator | 2026-01-02 01:06:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:06:02.261378 | orchestrator | 2026-01-02 01:06:02 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state
STARTED 2026-01-02 01:06:02.261435 | orchestrator | 2026-01-02 01:06:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:06:05.317846 | orchestrator | 2026-01-02 01:06:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:06:05.319832 | orchestrator | 2026-01-02 01:06:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:06:05.323859 | orchestrator | 2026-01-02 01:06:05 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state STARTED 2026-01-02 01:06:05.324011 | orchestrator | 2026-01-02 01:06:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:06:08.371783 | orchestrator | 2026-01-02 01:06:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:06:08.377646 | orchestrator | 2026-01-02 01:06:08 | INFO  | Task 9d2151db-27de-4676-b211-da2e4467d4ea is in state STARTED 2026-01-02 01:06:08.384717 | orchestrator | 2026-01-02 01:06:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:06:08.390000 | orchestrator | 2026-01-02 01:06:08 | INFO  | Task 2f4db230-b528-43a8-ab0c-84833f3222a6 is in state SUCCESS 2026-01-02 01:06:08.392904 | orchestrator | 2026-01-02 01:06:08.392947 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-02 01:06:08.392960 | orchestrator | 2.16.14 2026-01-02 01:06:08.392974 | orchestrator | 2026-01-02 01:06:08.392986 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-01-02 01:06:08.392998 | orchestrator | 2026-01-02 01:06:08.393009 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-01-02 01:06:08.393021 | orchestrator | Friday 02 January 2026 01:02:38 +0000 (0:00:00.287) 0:00:00.287 ******** 2026-01-02 01:06:08.393033 | orchestrator | changed: [testbed-manager] 2026-01-02 01:06:08.393046 | orchestrator | 2026-01-02 01:06:08.393057 | 
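The "Task ... is in state STARTED" / "Wait ... until the next check" messages above come from a simple polling loop: each task ID is re-queried until it leaves STARTED (here ending in SUCCESS for 2f4db230). A minimal sketch of that pattern, assuming a hypothetical `get_task_state` helper (this is not the OSISM client API):

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=3.0, sleep=time.sleep):
    """Poll every task until none is left in state STARTED.

    Mirrors the wait loop in the log above; `get_task_state` is a
    hypothetical callable returning a state string for a task ID.
    """
    pending = list(task_ids)
    final_states = {}
    while pending:
        still_running = []
        for task_id in pending:
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "STARTED":
                still_running.append(task_id)
            else:
                final_states[task_id] = state
        pending = still_running
        if pending:
            print("Wait until the next check")
            sleep(interval)
    return final_states
```

In the log the configured message says "1 second(s)" while checks actually land about three seconds apart; the interval is injectable here so the loop can be exercised without real delays.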
orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-01-02 01:06:08.393069 | orchestrator | Friday 02 January 2026 01:02:40 +0000 (0:00:02.216) 0:00:02.503 ******** 2026-01-02 01:06:08.393081 | orchestrator | changed: [testbed-manager] 2026-01-02 01:06:08.393093 | orchestrator | 2026-01-02 01:06:08.393106 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-01-02 01:06:08.393119 | orchestrator | Friday 02 January 2026 01:02:42 +0000 (0:00:01.089) 0:00:03.592 ******** 2026-01-02 01:06:08.393131 | orchestrator | changed: [testbed-manager] 2026-01-02 01:06:08.393143 | orchestrator | 2026-01-02 01:06:08.393155 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-01-02 01:06:08.393167 | orchestrator | Friday 02 January 2026 01:02:43 +0000 (0:00:01.044) 0:00:04.637 ******** 2026-01-02 01:06:08.393179 | orchestrator | changed: [testbed-manager] 2026-01-02 01:06:08.393191 | orchestrator | 2026-01-02 01:06:08.393203 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-01-02 01:06:08.393215 | orchestrator | Friday 02 January 2026 01:02:44 +0000 (0:00:01.186) 0:00:05.824 ******** 2026-01-02 01:06:08.393227 | orchestrator | changed: [testbed-manager] 2026-01-02 01:06:08.393239 | orchestrator | 2026-01-02 01:06:08.393251 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-01-02 01:06:08.393263 | orchestrator | Friday 02 January 2026 01:02:45 +0000 (0:00:01.193) 0:00:07.018 ******** 2026-01-02 01:06:08.393275 | orchestrator | changed: [testbed-manager] 2026-01-02 01:06:08.393287 | orchestrator | 2026-01-02 01:06:08.393299 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-01-02 01:06:08.393311 | orchestrator | Friday 02 January 2026 01:02:46 +0000 (0:00:01.200) 0:00:08.218 ******** 
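The mgr/dashboard tasks above each set one Ceph manager configuration key (ssl, server_port, server_addr, standby_behaviour, standby_error_status_code) before the dashboard module is re-enabled. A sketch that renders the equivalent `ceph config set` invocations — the values are read off the task names in the log, and actually executing them against a cluster is left out:

```python
# Render the `ceph config set mgr ...` commands corresponding to the
# dashboard tasks above. Pure string construction; no cluster access.
def dashboard_config_commands(settings):
    return [
        f"ceph config set mgr mgr/dashboard/{key} {value}"
        for key, value in settings.items()
    ]

# Values mirrored from the play's task names.
DASHBOARD_SETTINGS = {
    "ssl": "false",
    "server_port": "7000",
    "server_addr": "0.0.0.0",
    "standby_behaviour": "error",
    "standby_error_status_code": "404",
}
```

Setting standby_behaviour to error with status code 404 makes standby managers answer probes with an error instead of redirecting, which keeps a load balancer pointed at the active manager only.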
2026-01-02 01:06:08.393323 | orchestrator | changed: [testbed-manager] 2026-01-02 01:06:08.393335 | orchestrator | 2026-01-02 01:06:08.393347 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-01-02 01:06:08.393359 | orchestrator | Friday 02 January 2026 01:02:48 +0000 (0:00:02.023) 0:00:10.242 ******** 2026-01-02 01:06:08.393371 | orchestrator | changed: [testbed-manager] 2026-01-02 01:06:08.393383 | orchestrator | 2026-01-02 01:06:08.393395 | orchestrator | TASK [Create admin user] ******************************************************* 2026-01-02 01:06:08.393407 | orchestrator | Friday 02 January 2026 01:02:49 +0000 (0:00:01.237) 0:00:11.480 ******** 2026-01-02 01:06:08.393554 | orchestrator | changed: [testbed-manager] 2026-01-02 01:06:08.393574 | orchestrator | 2026-01-02 01:06:08.393588 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-01-02 01:06:08.393599 | orchestrator | Friday 02 January 2026 01:03:43 +0000 (0:00:53.801) 0:01:05.281 ******** 2026-01-02 01:06:08.393611 | orchestrator | skipping: [testbed-manager] 2026-01-02 01:06:08.393622 | orchestrator | 2026-01-02 01:06:08.393633 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-02 01:06:08.393644 | orchestrator | 2026-01-02 01:06:08.393714 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-02 01:06:08.393728 | orchestrator | Friday 02 January 2026 01:03:43 +0000 (0:00:00.186) 0:01:05.468 ******** 2026-01-02 01:06:08.393739 | orchestrator | changed: [testbed-node-0] 2026-01-02 01:06:08.393750 | orchestrator | 2026-01-02 01:06:08.393762 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-02 01:06:08.393773 | orchestrator | 2026-01-02 01:06:08.393784 | orchestrator | TASK [Restart ceph manager service] 
******************************************** 2026-01-02 01:06:08.393795 | orchestrator | Friday 02 January 2026 01:03:55 +0000 (0:00:11.710) 0:01:17.179 ******** 2026-01-02 01:06:08.393806 | orchestrator | changed: [testbed-node-1] 2026-01-02 01:06:08.393817 | orchestrator | 2026-01-02 01:06:08.393828 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-02 01:06:08.393839 | orchestrator | 2026-01-02 01:06:08.393850 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-02 01:06:08.393861 | orchestrator | Friday 02 January 2026 01:03:56 +0000 (0:00:01.265) 0:01:18.444 ******** 2026-01-02 01:06:08.393872 | orchestrator | changed: [testbed-node-2] 2026-01-02 01:06:08.393883 | orchestrator | 2026-01-02 01:06:08.393894 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 01:06:08.393907 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 01:06:08.393920 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:06:08.393932 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:06:08.393943 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:06:08.393954 | orchestrator | 2026-01-02 01:06:08.393966 | orchestrator | 2026-01-02 01:06:08.393977 | orchestrator | 2026-01-02 01:06:08.393988 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 01:06:08.393999 | orchestrator | Friday 02 January 2026 01:04:08 +0000 (0:00:11.267) 0:01:29.712 ******** 2026-01-02 01:06:08.394010 | orchestrator | =============================================================================== 2026-01-02 01:06:08.394067 | orchestrator | 
Create admin user ------------------------------------------------------ 53.80s 2026-01-02 01:06:08.394095 | orchestrator | Restart ceph manager service ------------------------------------------- 24.24s 2026-01-02 01:06:08.394107 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.22s 2026-01-02 01:06:08.394118 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.02s 2026-01-02 01:06:08.394129 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.24s 2026-01-02 01:06:08.394140 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.20s 2026-01-02 01:06:08.394151 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.19s 2026-01-02 01:06:08.394194 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.19s 2026-01-02 01:06:08.394207 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.09s 2026-01-02 01:06:08.394219 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.04s 2026-01-02 01:06:08.394230 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.19s 2026-01-02 01:06:08.394241 | orchestrator | 2026-01-02 01:06:08.394252 | orchestrator | 2026-01-02 01:06:08.394263 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 01:06:08.394274 | orchestrator | 2026-01-02 01:06:08.394285 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 01:06:08.394296 | orchestrator | Friday 02 January 2026 01:03:03 +0000 (0:00:00.294) 0:00:00.294 ******** 2026-01-02 01:06:08.394316 | orchestrator | ok: [testbed-manager] 2026-01-02 01:06:08.394329 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:06:08.394340 | orchestrator | ok: 
[testbed-node-1] 2026-01-02 01:06:08.394352 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:06:08.394363 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:06:08.394374 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:06:08.394385 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:06:08.394396 | orchestrator | 2026-01-02 01:06:08.394408 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 01:06:08.394419 | orchestrator | Friday 02 January 2026 01:03:04 +0000 (0:00:00.789) 0:00:01.083 ******** 2026-01-02 01:06:08.394431 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-01-02 01:06:08.394549 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-01-02 01:06:08.394565 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-01-02 01:06:08.394576 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-01-02 01:06:08.394587 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-01-02 01:06:08.394598 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-01-02 01:06:08.394609 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-01-02 01:06:08.394620 | orchestrator | 2026-01-02 01:06:08.394632 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-01-02 01:06:08.394643 | orchestrator | 2026-01-02 01:06:08.394661 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-02 01:06:08.394672 | orchestrator | Friday 02 January 2026 01:03:05 +0000 (0:00:00.695) 0:00:01.778 ******** 2026-01-02 01:06:08.394684 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 01:06:08.394697 | orchestrator | 2026-01-02 01:06:08.394709 | 
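The "Group hosts based on enabled services" task above builds dynamic inventory groups from boolean flags, producing names like `enable_prometheus_True` so later plays can target only hosts with the service enabled. The same bucketing, sketched outside Ansible (host names and flags here are illustrative, not read from the testbed inventory):

```python
# Sketch of Ansible's group_by with key="enable_prometheus_{{ enable_prometheus }}":
# bucket hosts into dynamic groups derived from a per-host boolean flag.
def group_hosts(hosts):
    """Map dynamic group name -> list of hosts carrying that flag value."""
    groups = {}
    for name, flags in hosts.items():
        key = f"enable_prometheus_{flags['enable_prometheus']}"
        groups.setdefault(key, []).append(name)
    return groups
```

In the log every host lands in `enable_prometheus_True`, which is why the subsequent prometheus role runs on the manager and all six nodes.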
orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-01-02 01:06:08.394720 | orchestrator | Friday 02 January 2026 01:03:06 +0000 (0:00:01.566) 0:00:03.345 ******** 2026-01-02 01:06:08.394736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.394751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.394814 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-02 01:06:08.394827 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.394849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.394862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.394880 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.394893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.394905 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.394917 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.394937 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.394957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.394972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.394984 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.395002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.395014 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.395028 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.395040 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.395060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.395079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.395091 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.395109 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-02 01:06:08.395124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.395136 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.395148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.395188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.395201 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.395213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.395225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.395238 | orchestrator | 2026-01-02 01:06:08.395250 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-02 01:06:08.395261 | orchestrator | Friday 02 January 2026 01:03:09 +0000 (0:00:03.253) 0:00:06.598 ******** 2026-01-02 01:06:08.395278 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 01:06:08.395290 | orchestrator | 2026-01-02 01:06:08.395302 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-01-02 01:06:08.395313 | orchestrator | Friday 02 January 2026 01:03:11 +0000 (0:00:01.448) 0:00:08.047 ******** 2026-01-02 01:06:08.395325 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-02 01:06:08.395337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.395365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.395385 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.395398 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.395410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.395421 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.395438 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.395450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.395462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.395481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.395541 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.395554 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.395566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.395644 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.395661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.395673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.395692 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.395705 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.395726 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-02 01:06:08.395740 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
2026-01-02 01:06:08.395751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.395768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.395781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.395804 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.395816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.395835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.395863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.395876 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.395888 | orchestrator | 2026-01-02 01:06:08.395994 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-01-02 01:06:08.396008 | orchestrator | Friday 02 January 2026 01:03:17 +0000 (0:00:06.359) 0:00:14.407 ******** 2026-01-02 01:06:08.396049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:06:08.396062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08.396083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08.396095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 01:06:08.396114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08.396127 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-02 01:06:08.396139 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:06:08.396155 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 01:06:08.396168 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-02 01:06:08.396187 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08.396198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:06:08.396216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08.396229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08.396240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 01:06:08.396252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08.396274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:06:08.396286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08.396298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08.396309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  
2026-01-02 01:06:08.396329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08.396341 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:06:08.396353 | orchestrator | skipping: [testbed-manager] 2026-01-02 01:06:08.396364 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:06:08.396375 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:06:08.396387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:06:08.396398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 
01:06:08.396421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-02 01:06:08.396433 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:06:08.396445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:06:08.396457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 01:06:08.396469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-02 01:06:08.396480 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:06:08.396586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:06:08.396601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 01:06:08.396646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': 
{}}})  2026-01-02 01:06:08.396659 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:06:08.396670 | orchestrator | 2026-01-02 01:06:08.396700 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-02 01:06:08.396724 | orchestrator | Friday 02 January 2026 01:03:19 +0000 (0:00:01.522) 0:00:15.929 ******** 2026-01-02 01:06:08.396742 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-02 01:06:08.396755 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:06:08.396766 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 01:06:08.396786 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-02 01:06:08.396799 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08.396811 | orchestrator | skipping: [testbed-manager] 
2026-01-02 01:06:08.396823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:06:08.396842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08.396858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08.396870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 01:06:08.396882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08.396894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:06:08.396913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08.396925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08.396937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 01:06:08.396955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08.396966 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:06:08.396980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:06:08.396991 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:06:08.397001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08.397011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08.397021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 01:06:08.397038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:06:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:06:08.397061 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:06:08.397071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:06:08.397088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 01:06:08.397103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-02 01:06:08.397113 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:06:08.397123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:06:08.397134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 01:06:08.397144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': 
{}}})  2026-01-02 01:06:08.397154 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:06:08.397169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:06:08.397181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 01:06:08.397205 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-02 01:06:08.397222 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:06:08.397239 | orchestrator | 2026-01-02 01:06:08.397256 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-01-02 
01:06:08.397269 | orchestrator | Friday 02 January 2026 01:03:21 +0000 (0:00:02.062) 0:00:17.991 ******** 2026-01-02 01:06:08.397285 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-02 01:06:08.397296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.397306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': 
{}}}) 2026-01-02 01:06:08.397317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.397334 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.397352 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.397362 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.397372 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.397387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.397398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.397408 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.397419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.397429 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.397453 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.397464 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.397475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.397514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.397525 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.397536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.397546 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.397557 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.397636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.397649 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-02 01:06:08.397668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.397679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.397689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.397699 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.397718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.397735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.397746 | orchestrator | 2026-01-02 01:06:08.397756 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-01-02 01:06:08.397766 | orchestrator | Friday 02 January 2026 01:03:27 +0000 (0:00:05.858) 0:00:23.850 ******** 2026-01-02 01:06:08.397777 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-02 01:06:08.397787 | orchestrator | 2026-01-02 01:06:08.397797 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-01-02 01:06:08.397806 | orchestrator | Friday 02 January 2026 01:03:28 +0000 (0:00:01.422) 0:00:25.273 ******** 2026-01-02 01:06:08.397817 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094315, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1060796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.397832 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094315, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1060796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.397843 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094315, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1060796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.397853 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094340, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.110365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 
01:06:08.397871 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094315, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1060796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.397887 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094340, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.110365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.397898 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094304, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1051872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.397909 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094315, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1060796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.397923 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094340, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.110365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.397934 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094304, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1051872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.397944 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 
'inode': 1094315, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1060796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.397962 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094340, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.110365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.397980 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094329, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1079974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.397990 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094329, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1079974, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.398001 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094340, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.110365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.398043 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094315, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1060796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-02 01:06:08.398057 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094299, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1028292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-01-02 01:06:08.398074 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094299, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1028292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.398084 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094304, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1051872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.398584 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094340, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.110365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.398626 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094317, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1060796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.398643 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094304, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1051872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.398672 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094304, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1051872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.398689 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 
1094317, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1060796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.398719 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094329, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1079974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.398731 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094329, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1079974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.398752 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094324, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.10708, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.398763 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094329, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1079974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.398773 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094304, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1051872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.398788 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094299, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1028292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 
01:06:08.398799 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094318, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1064167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.398815 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094324, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.10708, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.398826 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094299, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1028292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.398842 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094340, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.110365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 01:06:08.398853 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094299, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1028292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 01:06:08.398863 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-01-02 01:06:08.398873 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-01-02 01:06:08.398888 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-01-02 01:06:08.398904 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules, mode=0644, size=12293)
2026-01-02 01:06:08.398914 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-01-02 01:06:08.398929 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-01-02 01:06:08.398940 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2026-01-02 01:06:08.398950 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2026-01-02 01:06:08.398961 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-01-02 01:06:08.398984 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2026-01-02 01:06:08.399000 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2026-01-02 01:06:08.399017 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-01-02 01:06:08.399041 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-01-02 01:06:08.399172 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-01-02 01:06:08.399196 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-01-02 01:06:08.399209 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-01-02 01:06:08.399237 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2026-01-02 01:06:08.399250 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-01-02 01:06:08.399262 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-01-02 01:06:08.399283 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-01-02 01:06:08.399296 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules, mode=0644, size=55956)
2026-01-02 01:06:08.399309 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-01-02 01:06:08.399321 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-01-02 01:06:08.399344 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-01-02 01:06:08.399358 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-01-02 01:06:08.399370 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-01-02 01:06:08.399382 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-01-02 01:06:08.399400 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-01-02 01:06:08.399413 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-01-02 01:06:08.399426 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-01-02 01:06:08.399451 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-01-02 01:06:08.399462 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-01-02 01:06:08.399475 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-01-02 01:06:08.399549 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-01-02 01:06:08.399570 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-01-02 01:06:08.399582 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2026-01-02 01:06:08.399600 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-01-02 01:06:08.399616 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules, mode=0644, size=12293)
2026-01-02 01:06:08.399628 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-01-02 01:06:08.399639 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2026-01-02 01:06:08.399651 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-01-02 01:06:08.399668 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-01-02 01:06:08.399680 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-01-02 01:06:08.399697 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-01-02 01:06:08.399713 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules, mode=0644, size=2309)
2026-01-02 01:06:08.399725 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-01-02 01:06:08.399738 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules, mode=0644, size=2309)
2026-01-02 01:06:08.399754 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-01-02 01:06:08.399779 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2026-01-02 01:06:08.399797 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-01-02 01:06:08.399824 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules, mode=0644, size=3792)
2026-01-02 01:06:08.399848 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/mysql.rules, mode=0644, size=3792)
2026-01-02 01:06:08.399866 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-01-02 01:06:08.399883 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rec.rules, mode=0644, size=2309)
2026-01-02 01:06:08.399900 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-01-02 01:06:08.399927 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-01-02 01:06:08.399956 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-01-02 01:06:08.399968 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-01-02 01:06:08.399984 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2026-01-02 01:06:08.399994 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/mysql.rules, mode=0644, size=3792)
2026-01-02 01:06:08.400004 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules, mode=0644, size=3539)
2026-01-02 01:06:08.400015 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:06:08.400028 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-01-02 01:06:08.400044 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/rabbitmq.rules, mode=0644, size=3539)
2026-01-02 01:06:08.400062 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:06:08.400071 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-01-02 01:06:08.400080 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-01-02 01:06:08.400092 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/rabbitmq.rules, mode=0644, size=3539)
2026-01-02 01:06:08.400100 | orchestrator | skipping: [testbed-node-5]
2026-01-02 01:06:08.400108 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094298, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1024084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.400117 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094298, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1024084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.400125 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094298, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1024084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.400143 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094322, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.10684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.400152 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094322, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.10684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.400160 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094322, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.10684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.400172 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094321, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 
1767313046.1067011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.400181 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094321, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1067011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.400189 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094317, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1060796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-02 01:06:08.400198 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094368, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1150556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.400211 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:06:08.400224 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094321, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1067011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.400233 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094368, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1150556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.400241 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:06:08.400249 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094368, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1150556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2026-01-02 01:06:08.400258 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:06:08.400270 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094324, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.10708, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-02 01:06:08.400278 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094318, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1064167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-02 01:06:08.400287 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094311, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1051872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-02 01:06:08.400295 | orchestrator | 
changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094337, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1095805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-02 01:06:08.400315 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094296, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1024084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-02 01:06:08.400324 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094370, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1150556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-02 01:06:08.400332 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094335, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.108916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-02 01:06:08.400344 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094301, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1036005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-02 01:06:08.400352 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094298, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1024084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-02 01:06:08.400361 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094322, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 
'mtime': 1767312165.0, 'ctime': 1767313046.10684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 01:06:08.400377 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094321, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1067011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 01:06:08.400390 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094368, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.1150556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 01:06:08.400398 | orchestrator |
2026-01-02 01:06:08.400407 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-01-02 01:06:08.400416 | orchestrator | Friday 02 January 2026 01:03:54 +0000 (0:00:26.051) 0:00:51.325 ********
2026-01-02 01:06:08.400424 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-02 01:06:08.400432 | orchestrator |
2026-01-02 01:06:08.400440 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-01-02 01:06:08.400448 | orchestrator | Friday 02 January 2026 01:03:55 +0000 (0:00:00.773) 0:00:52.098 ********
2026-01-02 01:06:08.400456 | orchestrator | [WARNING]: Skipped
2026-01-02 01:06:08.400465 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-02 01:06:08.400473 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-01-02 01:06:08.400481 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-02 01:06:08.400510 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-01-02 01:06:08.400519 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-02 01:06:08.400527 | orchestrator | [WARNING]: Skipped
2026-01-02 01:06:08.400536 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-02 01:06:08.400544 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-01-02 01:06:08.400552 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-02 01:06:08.400560 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-01-02 01:06:08.400569 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-02 01:06:08.400577 | orchestrator | [WARNING]: Skipped
2026-01-02 01:06:08.400585 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-02 01:06:08.400593 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-01-02 01:06:08.400605 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-02 01:06:08.400614 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-01-02 01:06:08.400622 | orchestrator | [WARNING]: Skipped
2026-01-02 01:06:08.400630 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-02 01:06:08.400638 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-01-02 01:06:08.400646 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-02 01:06:08.400654 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-01-02 01:06:08.400663 | orchestrator | [WARNING]: Skipped
2026-01-02 01:06:08.400671 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-02 01:06:08.400679 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-01-02 01:06:08.400693 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-02 01:06:08.400701 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-01-02 01:06:08.400709 | orchestrator | [WARNING]: Skipped
2026-01-02 01:06:08.400717 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-02 01:06:08.400726 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-01-02 01:06:08.400734 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-02 01:06:08.400742 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-01-02 01:06:08.400750 | orchestrator | [WARNING]: Skipped
2026-01-02 01:06:08.400758 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-02 01:06:08.400766 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-01-02 01:06:08.400774 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-02 01:06:08.400782 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-01-02 01:06:08.400790 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-02 01:06:08.400798 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-02 01:06:08.400806 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-02 01:06:08.400814 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-02 01:06:08.400823 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-02 01:06:08.400831 | orchestrator |
2026-01-02 01:06:08.400839 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-01-02 01:06:08.400847 | orchestrator | Friday 02 January 2026 01:03:57 +0000 (0:00:01.824) 0:00:53.923 ********
2026-01-02 01:06:08.400855 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-02 01:06:08.400863 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:06:08.400872 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-02 01:06:08.400880 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:06:08.400888 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-02 01:06:08.400896 | orchestrator | skipping: [testbed-node-3]
2026-01-02 01:06:08.401169 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-02 01:06:08.401185 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:06:08.401193 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-02 01:06:08.401201 | orchestrator | skipping: [testbed-node-4]
2026-01-02 01:06:08.401209 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-02 01:06:08.401217 | orchestrator | skipping: [testbed-node-5]
2026-01-02 01:06:08.401226 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-02 01:06:08.401234 | orchestrator |
2026-01-02 01:06:08.401242 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-01-02 01:06:08.401250 | orchestrator | Friday 02 January 2026 01:04:12 +0000 (0:00:15.350) 0:01:09.273 ********
2026-01-02 01:06:08.401258 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-02 01:06:08.401266 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:06:08.401274 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-02 01:06:08.401282 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:06:08.401290 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-02 01:06:08.401298 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:06:08.401306 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-02 01:06:08.401320 | orchestrator | skipping: [testbed-node-4]
2026-01-02 01:06:08.401328 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-02 01:06:08.401336 | orchestrator | skipping: [testbed-node-3]
2026-01-02 01:06:08.401344 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-02 01:06:08.401352 | orchestrator | skipping: [testbed-node-5]
2026-01-02 01:06:08.401360 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-02 01:06:08.401368 | orchestrator |
2026-01-02 01:06:08.401376 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-01-02 01:06:08.401384 | orchestrator | Friday 02 January 2026 01:04:15 +0000 (0:00:02.720) 0:01:11.994 ********
2026-01-02 01:06:08.401397 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-02 01:06:08.401406 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-02 01:06:08.401414 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:06:08.401423 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-02 01:06:08.401431 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:06:08.401439 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:06:08.401447 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-02 01:06:08.401455 | orchestrator | skipping: [testbed-node-3]
2026-01-02 01:06:08.401463 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-02 01:06:08.401471 | orchestrator | skipping: [testbed-node-4]
2026-01-02 01:06:08.401479 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-02 01:06:08.401506 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-02 01:06:08.401515 | orchestrator | skipping: [testbed-node-5]
2026-01-02 01:06:08.401523 | orchestrator |
2026-01-02 01:06:08.401531 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-01-02 01:06:08.401539 | orchestrator | Friday 02 January 2026 01:04:16 +0000 (0:00:01.374) 0:01:13.368 ********
2026-01-02 01:06:08.401547 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-02 01:06:08.401555 | orchestrator |
2026-01-02 01:06:08.401563 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-01-02 01:06:08.401571 | orchestrator | Friday 02 January 2026 01:04:17 +0000 (0:00:00.731) 0:01:14.100 ********
2026-01-02 01:06:08.401578 | orchestrator | skipping: [testbed-manager]
2026-01-02 01:06:08.401586 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:06:08.401594 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:06:08.401602 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:06:08.401610 | orchestrator | skipping: [testbed-node-3]
2026-01-02 01:06:08.401618 | orchestrator | skipping: [testbed-node-4]
2026-01-02 01:06:08.401626 | orchestrator | skipping: [testbed-node-5]
2026-01-02 01:06:08.401634 | orchestrator |
2026-01-02 01:06:08.401641 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-01-02 01:06:08.401649 | orchestrator | Friday 02 January 2026 01:04:17 +0000 (0:00:00.622) 0:01:14.722 ********
2026-01-02 01:06:08.401657 | orchestrator | skipping: [testbed-manager]
2026-01-02 01:06:08.401665 | orchestrator | skipping: [testbed-node-3]
2026-01-02 01:06:08.401673 | orchestrator | skipping: [testbed-node-4]
2026-01-02 01:06:08.401681 | orchestrator | skipping: [testbed-node-5]
2026-01-02 01:06:08.401689 | orchestrator | changed: [testbed-node-0]
2026-01-02 01:06:08.401702 | orchestrator | changed: [testbed-node-1]
2026-01-02 01:06:08.401710 | orchestrator | changed: [testbed-node-2]
2026-01-02 01:06:08.401718 | orchestrator |
2026-01-02 01:06:08.401731 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-01-02 01:06:08.401739 | orchestrator | Friday 02 January 2026 01:04:20 +0000 (0:00:02.064) 0:01:16.787 ********
2026-01-02 01:06:08.401747 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-02 01:06:08.401755 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-02 01:06:08.401763 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:06:08.401771 | orchestrator | skipping: [testbed-manager]
2026-01-02 01:06:08.401779 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-02 01:06:08.401789 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:06:08.401798 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-02 01:06:08.401808 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:06:08.401817 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-02 01:06:08.401827 | orchestrator | skipping: [testbed-node-3]
2026-01-02 01:06:08.401836 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-02 01:06:08.401846 | orchestrator | skipping: [testbed-node-4]
2026-01-02 01:06:08.401856 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-02 01:06:08.401866 | orchestrator | skipping: [testbed-node-5]
2026-01-02 01:06:08.401875 | orchestrator |
2026-01-02 01:06:08.401885 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-01-02 01:06:08.401896 | orchestrator | Friday 02 January 2026 01:04:21 +0000 (0:00:01.334) 0:01:18.122 ********
2026-01-02 01:06:08.401905 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-02 01:06:08.401915 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:06:08.401924 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-02 01:06:08.401933 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:06:08.401943 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-02 01:06:08.401952 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:06:08.401962 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-02 01:06:08.401975 | orchestrator | skipping: [testbed-node-4]
2026-01-02 01:06:08.401985 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-02 01:06:08.401995 | orchestrator | skipping: [testbed-node-3]
2026-01-02 01:06:08.402004 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-02 01:06:08.402013 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-02 01:06:08.402051 | orchestrator | skipping: [testbed-node-5]
2026-01-02 01:06:08.402061 | orchestrator |
2026-01-02 01:06:08.402071 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-01-02 01:06:08.402081 | orchestrator | Friday 02 January 2026 01:04:22 +0000 (0:00:01.291) 0:01:19.413 ********
2026-01-02 01:06:08.402091 | orchestrator | [WARNING]: Skipped
2026-01-02 01:06:08.402100 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-01-02 01:06:08.402110 | orchestrator | due to this access issue:
2026-01-02 01:06:08.402119 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-01-02 01:06:08.402138 | orchestrator | not a directory
2026-01-02 01:06:08.402147 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-02 01:06:08.402155 | orchestrator |
2026-01-02 01:06:08.402162 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-01-02 01:06:08.402170 | orchestrator | Friday 02 January 2026 01:04:23 +0000 (0:00:01.043) 0:01:20.457 ********
2026-01-02 01:06:08.402178 | orchestrator | skipping: [testbed-manager]
2026-01-02 01:06:08.402186 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:06:08.402194 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:06:08.402202 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:06:08.402210 | orchestrator | skipping: [testbed-node-3]
2026-01-02 01:06:08.402218 | orchestrator | skipping: [testbed-node-4]
2026-01-02 01:06:08.402226 | orchestrator | skipping: [testbed-node-5]
2026-01-02 01:06:08.402234 | orchestrator |
2026-01-02 01:06:08.402242 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-01-02 01:06:08.402250 | orchestrator | Friday 02 January 2026 01:04:24 +0000 (0:00:00.811) 0:01:21.268 ********
2026-01-02 01:06:08.402257 | orchestrator | skipping: [testbed-manager]
2026-01-02 01:06:08.402265 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:06:08.402273 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:06:08.402281 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:06:08.402289 | orchestrator | skipping: [testbed-node-3]
2026-01-02 01:06:08.402297 | orchestrator | skipping: [testbed-node-4]
2026-01-02 01:06:08.402305 | orchestrator | skipping: [testbed-node-5]
2026-01-02 01:06:08.402313 | orchestrator |
2026-01-02 01:06:08.402321 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-01-02 01:06:08.402329 | orchestrator | Friday 02 January 2026 01:04:25 +0000 (0:00:00.948) 0:01:22.217 ********
2026-01-02 01:06:08.402344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.402354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.402364 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-02 01:06:08.402376 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.402390 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.402399 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.402407 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.402416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.402430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.402439 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:06:08.402447 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.402459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.402473 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.402482 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.402539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-01-02 01:06:08.402553 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.402561 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.402570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.402578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.402596 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.402605 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.402614 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-02 01:06:08.402624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.402636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.402645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:06:08.402653 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.402671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.402679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.402688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:06:08.402696 | orchestrator | 2026-01-02 01:06:08.402704 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-01-02 01:06:08.402712 | orchestrator | Friday 02 January 2026 01:04:29 +0000 (0:00:04.141) 0:01:26.359 ******** 2026-01-02 01:06:08.402720 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-02 01:06:08.402728 | orchestrator | skipping: [testbed-manager] 2026-01-02 01:06:08.402736 | orchestrator | 2026-01-02 01:06:08.402744 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-02 01:06:08.402752 | orchestrator | Friday 02 January 2026 01:04:30 +0000 (0:00:01.258) 0:01:27.618 ******** 2026-01-02 01:06:08.402760 | orchestrator | 2026-01-02 01:06:08.402769 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-02 01:06:08.402777 | orchestrator | Friday 02 January 2026 01:04:30 +0000 (0:00:00.068) 0:01:27.686 ******** 2026-01-02 01:06:08.402784 | orchestrator | 2026-01-02 01:06:08.402792 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-02 01:06:08.402800 | orchestrator | Friday 02 January 2026 01:04:31 +0000 (0:00:00.069) 0:01:27.756 ******** 2026-01-02 01:06:08.402808 | orchestrator | 2026-01-02 01:06:08.402816 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-02 01:06:08.402824 | orchestrator | Friday 02 January 2026 01:04:31 +0000 (0:00:00.065) 0:01:27.821 ******** 2026-01-02 01:06:08.402832 | orchestrator | 2026-01-02 01:06:08.402843 | orchestrator | TASK [prometheus : Flush 
handlers] ********************************************* 2026-01-02 01:06:08.402852 | orchestrator | Friday 02 January 2026 01:04:31 +0000 (0:00:00.312) 0:01:28.133 ******** 2026-01-02 01:06:08.402860 | orchestrator | 2026-01-02 01:06:08.402868 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-02 01:06:08.402876 | orchestrator | Friday 02 January 2026 01:04:31 +0000 (0:00:00.065) 0:01:28.199 ******** 2026-01-02 01:06:08.402884 | orchestrator | 2026-01-02 01:06:08.402892 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-02 01:06:08.402900 | orchestrator | Friday 02 January 2026 01:04:31 +0000 (0:00:00.065) 0:01:28.265 ******** 2026-01-02 01:06:08.402913 | orchestrator | 2026-01-02 01:06:08.402921 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-01-02 01:06:08.402929 | orchestrator | Friday 02 January 2026 01:04:31 +0000 (0:00:00.085) 0:01:28.350 ******** 2026-01-02 01:06:08.402937 | orchestrator | changed: [testbed-manager] 2026-01-02 01:06:08.402945 | orchestrator | 2026-01-02 01:06:08.402953 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-01-02 01:06:08.402961 | orchestrator | Friday 02 January 2026 01:04:52 +0000 (0:00:21.359) 0:01:49.709 ******** 2026-01-02 01:06:08.402969 | orchestrator | changed: [testbed-node-0] 2026-01-02 01:06:08.402977 | orchestrator | changed: [testbed-node-2] 2026-01-02 01:06:08.402985 | orchestrator | changed: [testbed-node-1] 2026-01-02 01:06:08.402993 | orchestrator | changed: [testbed-node-4] 2026-01-02 01:06:08.403000 | orchestrator | changed: [testbed-node-3] 2026-01-02 01:06:08.403009 | orchestrator | changed: [testbed-node-5] 2026-01-02 01:06:08.403016 | orchestrator | changed: [testbed-manager] 2026-01-02 01:06:08.403025 | orchestrator | 2026-01-02 01:06:08.403033 | orchestrator | RUNNING HANDLER [prometheus : 
Restart prometheus-mysqld-exporter container] **** 2026-01-02 01:06:08.403041 | orchestrator | Friday 02 January 2026 01:05:01 +0000 (0:00:08.399) 0:01:58.109 ******** 2026-01-02 01:06:08.403049 | orchestrator | changed: [testbed-node-1] 2026-01-02 01:06:08.403057 | orchestrator | changed: [testbed-node-2] 2026-01-02 01:06:08.403064 | orchestrator | changed: [testbed-node-0] 2026-01-02 01:06:08.403072 | orchestrator | 2026-01-02 01:06:08.403080 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-01-02 01:06:08.403087 | orchestrator | Friday 02 January 2026 01:05:11 +0000 (0:00:10.330) 0:02:08.440 ******** 2026-01-02 01:06:08.403094 | orchestrator | changed: [testbed-node-2] 2026-01-02 01:06:08.403100 | orchestrator | changed: [testbed-node-0] 2026-01-02 01:06:08.403107 | orchestrator | changed: [testbed-node-1] 2026-01-02 01:06:08.403114 | orchestrator | 2026-01-02 01:06:08.403121 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-01-02 01:06:08.403127 | orchestrator | Friday 02 January 2026 01:05:21 +0000 (0:00:09.664) 0:02:18.104 ******** 2026-01-02 01:06:08.403134 | orchestrator | changed: [testbed-node-2] 2026-01-02 01:06:08.403141 | orchestrator | changed: [testbed-manager] 2026-01-02 01:06:08.403151 | orchestrator | changed: [testbed-node-4] 2026-01-02 01:06:08.403158 | orchestrator | changed: [testbed-node-1] 2026-01-02 01:06:08.403165 | orchestrator | changed: [testbed-node-0] 2026-01-02 01:06:08.403172 | orchestrator | changed: [testbed-node-3] 2026-01-02 01:06:08.403179 | orchestrator | changed: [testbed-node-5] 2026-01-02 01:06:08.403186 | orchestrator | 2026-01-02 01:06:08.403192 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-01-02 01:06:08.403199 | orchestrator | Friday 02 January 2026 01:05:34 +0000 (0:00:13.518) 0:02:31.622 ******** 2026-01-02 01:06:08.403206 | orchestrator | changed: 
[testbed-manager] 2026-01-02 01:06:08.403212 | orchestrator | 2026-01-02 01:06:08.403219 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-01-02 01:06:08.403226 | orchestrator | Friday 02 January 2026 01:05:43 +0000 (0:00:08.660) 0:02:40.283 ******** 2026-01-02 01:06:08.403233 | orchestrator | changed: [testbed-node-1] 2026-01-02 01:06:08.403239 | orchestrator | changed: [testbed-node-0] 2026-01-02 01:06:08.403246 | orchestrator | changed: [testbed-node-2] 2026-01-02 01:06:08.403253 | orchestrator | 2026-01-02 01:06:08.403259 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-01-02 01:06:08.403266 | orchestrator | Friday 02 January 2026 01:05:48 +0000 (0:00:05.353) 0:02:45.636 ******** 2026-01-02 01:06:08.403273 | orchestrator | changed: [testbed-manager] 2026-01-02 01:06:08.403279 | orchestrator | 2026-01-02 01:06:08.403286 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-01-02 01:06:08.403293 | orchestrator | Friday 02 January 2026 01:05:54 +0000 (0:00:05.246) 0:02:50.883 ******** 2026-01-02 01:06:08.403300 | orchestrator | changed: [testbed-node-3] 2026-01-02 01:06:08.403306 | orchestrator | changed: [testbed-node-5] 2026-01-02 01:06:08.403318 | orchestrator | changed: [testbed-node-4] 2026-01-02 01:06:08.403324 | orchestrator | 2026-01-02 01:06:08.403331 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 01:06:08.403338 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-02 01:06:08.403347 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-02 01:06:08.403354 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-02 01:06:08.403361 | orchestrator | 
testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-02 01:06:08.403368 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-02 01:06:08.403375 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-02 01:06:08.403385 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-02 01:06:08.403392 | orchestrator | 2026-01-02 01:06:08.403398 | orchestrator | 2026-01-02 01:06:08.403405 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 01:06:08.403412 | orchestrator | Friday 02 January 2026 01:06:04 +0000 (0:00:10.701) 0:03:01.585 ******** 2026-01-02 01:06:08.403419 | orchestrator | =============================================================================== 2026-01-02 01:06:08.403426 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 26.05s 2026-01-02 01:06:08.403432 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.36s 2026-01-02 01:06:08.403439 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.35s 2026-01-02 01:06:08.403446 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.52s 2026-01-02 01:06:08.403453 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.70s 2026-01-02 01:06:08.403459 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.33s 2026-01-02 01:06:08.403466 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 9.66s 2026-01-02 01:06:08.403473 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.66s 2026-01-02 01:06:08.403480 | orchestrator | prometheus : 
Restart prometheus-node-exporter container ----------------- 8.40s
2026-01-02 01:06:08.403501 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.36s
2026-01-02 01:06:08.403509 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.86s
2026-01-02 01:06:08.403515 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.35s
2026-01-02 01:06:08.403522 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.25s
2026-01-02 01:06:08.403529 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.14s
2026-01-02 01:06:08.403536 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.25s
2026-01-02 01:06:08.403542 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.72s
2026-01-02 01:06:08.403549 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.06s
2026-01-02 01:06:08.403556 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.06s
2026-01-02 01:06:08.403563 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.82s
2026-01-02 01:06:08.403573 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.57s
2026-01-02 01:06:11.445409 | orchestrator | 2026-01-02 01:06:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 01:06:11.447420 | orchestrator | 2026-01-02 01:06:11 | INFO  | Task 9d2151db-27de-4676-b211-da2e4467d4ea is in state STARTED
2026-01-02 01:06:11.452880 | orchestrator | 2026-01-02 01:06:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 01:06:11.453127 | orchestrator | 2026-01-02 01:06:11 | INFO  | Wait 1 second(s) until the next check
[... identical status checks for the same three tasks repeated roughly every 3 seconds from 01:06:14 through 01:08:41, all tasks remaining in state STARTED ...]
2026-01-02 01:08:44.083228 | orchestrator | 2026-01-02 01:08:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 01:08:44.086958 | orchestrator | 2026-01-02 01:08:44 | INFO  | Task 9d2151db-27de-4676-b211-da2e4467d4ea is in state SUCCESS
2026-01-02 01:08:44.089022 | orchestrator |
2026-01-02 01:08:44.089173 | orchestrator |
2026-01-02 01:08:44.089195 | orchestrator | PLAY [Group hosts based on configuration]
**************************************
2026-01-02 01:08:44.089208 | orchestrator |
2026-01-02 01:08:44.089218 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-02 01:08:44.089229 | orchestrator | Friday 02 January 2026 01:06:09 +0000 (0:00:00.276) 0:00:00.276 ********
2026-01-02 01:08:44.089252 | orchestrator | ok: [testbed-node-0]
2026-01-02 01:08:44.089266 | orchestrator | ok: [testbed-node-1]
2026-01-02 01:08:44.089277 | orchestrator | ok: [testbed-node-2]
2026-01-02 01:08:44.089287 | orchestrator |
2026-01-02 01:08:44.089297 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-02 01:08:44.089307 | orchestrator | Friday 02 January 2026 01:06:10 +0000 (0:00:00.334) 0:00:00.611 ********
2026-01-02 01:08:44.089319 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-01-02 01:08:44.089336 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-01-02 01:08:44.089363 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-01-02 01:08:44.089380 | orchestrator |
2026-01-02 01:08:44.089395 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-01-02 01:08:44.089411 | orchestrator |
2026-01-02 01:08:44.089428 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-01-02 01:08:44.089547 | orchestrator | Friday 02 January 2026 01:06:10 +0000 (0:00:00.489) 0:00:01.100 ********
2026-01-02 01:08:44.089557 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 01:08:44.089568 | orchestrator |
2026-01-02 01:08:44.089578 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-01-02 01:08:44.089588 | orchestrator | Friday 02 January 2026 01:06:11 +0000 (0:00:00.553) 0:00:01.653 ********
2026-01-02 01:08:44.089646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-02 01:08:44.089675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-02 01:08:44.089694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-02 01:08:44.089740 | orchestrator |
2026-01-02 01:08:44.089754 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-01-02 01:08:44.089765 | orchestrator | Friday 02 January 2026 01:06:12 +0000 (0:00:00.764) 0:00:02.418 ********
2026-01-02 01:08:44.089779 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-01-02 01:08:44.089790 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-01-02 01:08:44.089801 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-02 01:08:44.089822 | orchestrator |
2026-01-02 01:08:44.089864 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-01-02 01:08:44.089875 | orchestrator | Friday 02 January 2026 01:06:12 +0000 (0:00:00.865) 0:00:03.284 ********
2026-01-02 01:08:44.089885 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 01:08:44.089895 | orchestrator |
2026-01-02 01:08:44.089905 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-01-02 01:08:44.089915 | orchestrator | Friday 02 January 2026 01:06:13 +0000 (0:00:01.316) 0:00:04.043 ********
2026-01-02 01:08:44.089941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-02 01:08:44.089952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-02 01:08:44.089968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-02 01:08:44.089979 | orchestrator |
2026-01-02 01:08:44.089989 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-01-02 01:08:44.089999 | orchestrator | Friday 02 January 2026 01:06:15 +0000 (0:00:01.316) 0:00:05.359 ********
2026-01-02 01:08:44.090009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-02 01:08:44.090114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-02 01:08:44.090144 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:08:44.090155 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:08:44.090173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-02 01:08:44.090184 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:08:44.090205 | orchestrator |
2026-01-02 01:08:44.090219 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-01-02 01:08:44.090242 | orchestrator | Friday 02 January 2026 01:06:15 +0000 (0:00:00.421) 0:00:05.781 ********
2026-01-02 01:08:44.090263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-02 01:08:44.090282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-02 01:08:44.090299 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:08:44.090316 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:08:44.090333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-02 01:08:44.090363 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:08:44.090373 | orchestrator |
2026-01-02 01:08:44.090383 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-01-02 01:08:44.090393 | orchestrator | Friday 02 January 2026 01:06:16 +0000 (0:00:00.824) 0:00:06.605 ********
2026-01-02 01:08:44.090403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-02 01:08:44.090414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes':
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-02 01:08:44.090466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-02 01:08:44.090478 | orchestrator | 2026-01-02 01:08:44.090488 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-01-02 01:08:44.090498 | orchestrator | Friday 02 January 2026 01:06:17 +0000 (0:00:01.361) 0:00:07.967 ******** 2026-01-02 01:08:44.090508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-02 01:08:44.090523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-02 01:08:44.090540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-02 01:08:44.090550 | orchestrator | 2026-01-02 01:08:44.090560 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-01-02 01:08:44.090570 | orchestrator | Friday 02 January 2026 01:06:19 +0000 (0:00:01.380) 0:00:09.347 ******** 2026-01-02 01:08:44.090580 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:08:44.090590 | orchestrator | skipping: [testbed-node-1] 
2026-01-02 01:08:44.090600 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:08:44.090656 | orchestrator | 2026-01-02 01:08:44.090669 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-01-02 01:08:44.090679 | orchestrator | Friday 02 January 2026 01:06:19 +0000 (0:00:00.516) 0:00:09.864 ******** 2026-01-02 01:08:44.090688 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-02 01:08:44.090699 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-02 01:08:44.090723 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-02 01:08:44.090733 | orchestrator | 2026-01-02 01:08:44.090742 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-01-02 01:08:44.090752 | orchestrator | Friday 02 January 2026 01:06:20 +0000 (0:00:01.245) 0:00:11.110 ******** 2026-01-02 01:08:44.090762 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-02 01:08:44.090772 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-02 01:08:44.090782 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-02 01:08:44.090792 | orchestrator | 2026-01-02 01:08:44.090802 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-01-02 01:08:44.090811 | orchestrator | Friday 02 January 2026 01:06:22 +0000 (0:00:01.334) 0:00:12.445 ******** 2026-01-02 01:08:44.090828 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-02 01:08:44.090838 | orchestrator | 2026-01-02 01:08:44.090849 | orchestrator | TASK [grafana : Find templated 
grafana dashboards] ***************************** 2026-01-02 01:08:44.090865 | orchestrator | Friday 02 January 2026 01:06:22 +0000 (0:00:00.757) 0:00:13.202 ******** 2026-01-02 01:08:44.090890 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-01-02 01:08:44.090911 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-01-02 01:08:44.090926 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:08:44.090943 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:08:44.090959 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:08:44.090975 | orchestrator | 2026-01-02 01:08:44.090991 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-01-02 01:08:44.091007 | orchestrator | Friday 02 January 2026 01:06:23 +0000 (0:00:00.774) 0:00:13.976 ******** 2026-01-02 01:08:44.091021 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:08:44.091035 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:08:44.091060 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:08:44.091076 | orchestrator | 2026-01-02 01:08:44.091093 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-01-02 01:08:44.091109 | orchestrator | Friday 02 January 2026 01:06:24 +0000 (0:00:00.545) 0:00:14.522 ******** 2026-01-02 01:08:44.091221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094103, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0295148, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094103, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0295148, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094103, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0295148, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094140, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0535154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094140, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0535154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094140, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0535154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094110, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0341504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094110, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0341504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094110, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0341504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094141, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0568001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094141, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0568001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094141, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0568001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094121, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0455153, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094121, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0455153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094121, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0455153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094134, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 
'mtime': 1767312165.0, 'ctime': 1767313046.0518725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094134, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0518725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094134, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0518725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094102, 'dev': 123, 'nlink': 1, 'atime': 
1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0289068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094102, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0289068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094102, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0289068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094105, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 
'mtime': 1767312165.0, 'ctime': 1767313046.031124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094105, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.031124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094105, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.031124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094113, 'dev': 123, 'nlink': 1, 
'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0349493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094113, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0349493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094113, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0349493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094123, 
'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0485153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094123, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0485153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094123, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0485153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 
'inode': 1094139, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0535154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094139, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0535154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094139, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0535154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 80386, 'inode': 1094108, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.031515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094108, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.031515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094108, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.031515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094128, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0507932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094128, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0507932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094128, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0507932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094122, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0471418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094122, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0471418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094122, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0471418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094120, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0455153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094120, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0455153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.091978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094120, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0455153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094119, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.041515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094119, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.041515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094119, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.041515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092036 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094126, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0495155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094126, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0495155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094126, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0495155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092089 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094116, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0405152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094116, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0405152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094116, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0405152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092120 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094136, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0526524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094136, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0526524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094136, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0526524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094287, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0988781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094287, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0988781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094287, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0988781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094162, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0710225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094162, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0710225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094162, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0710225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094152, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0605156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094152, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0605156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094152, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0605156, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094186, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.074288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094186, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.074288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094186, 'dev': 123, 'nlink': 1, 'atime': 
1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.074288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094146, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0576305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094146, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0576305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094146, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0576305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094237, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0901196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094237, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0901196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.092983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094237, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0901196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094189, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.083753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094189, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.083753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093185 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094189, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.083753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094251, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0907462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094251, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0907462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094251, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0907462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094280, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0978386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094280, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0978386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094235, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0865164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094280, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0978386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094235, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0865164, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094182, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0726054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094235, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0865164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094182, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 
1767313046.0726054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094159, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0661504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094182, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0726054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094159, 'dev': 123, 'nlink': 1, 
'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0661504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094178, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0726054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094178, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0726054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
82960, 'inode': 1094159, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0661504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094153, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0632195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094153, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0632195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094178, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0726054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094183, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0735588, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094183, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0735588, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094153, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0632195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094264, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0969884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094264, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0969884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093922 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094183, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0735588, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094258, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0925822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094258, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0925822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094264, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0969884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.093998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094148, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0585155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.094007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094148, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0585155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.094080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094258, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0925822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.094092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094150, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0595157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.094107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094150, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 
1767313046.0595157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.094116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094148, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0585155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.094125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094230, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0864744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.094143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 70691, 'inode': 1094230, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0864744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.094152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094150, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0595157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.094160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094254, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0910566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.094169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094254, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0910566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.094183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094230, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0864744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.094192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094254, 'dev': 123, 'nlink': 1, 'atime': 1767312165.0, 'mtime': 1767312165.0, 'ctime': 1767313046.0910566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-02 01:08:44.094205 | orchestrator | 2026-01-02 01:08:44.094233 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-01-02 
01:08:44.094243 | orchestrator | Friday 02 January 2026 01:07:02 +0000 (0:00:38.743) 0:00:53.265 ******** 2026-01-02 01:08:44.094256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-02 01:08:44.094265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-02 01:08:44.094273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-02 01:08:44.094282 | orchestrator | 2026-01-02 01:08:44.094290 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-01-02 01:08:44.094298 | orchestrator | Friday 02 January 2026 01:07:04 +0000 (0:00:01.032) 0:00:54.298 ******** 2026-01-02 01:08:44.094306 | orchestrator | changed: [testbed-node-0] 2026-01-02 01:08:44.094315 | orchestrator | 2026-01-02 01:08:44.094324 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-01-02 01:08:44.094331 | orchestrator | Friday 02 January 2026 01:07:06 +0000 (0:00:02.287) 0:00:56.585 ******** 2026-01-02 01:08:44.094339 | orchestrator | changed: [testbed-node-0] 2026-01-02 01:08:44.094359 | orchestrator | 2026-01-02 01:08:44.094368 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-02 01:08:44.094376 | orchestrator | Friday 02 January 2026 01:07:08 +0000 (0:00:02.336) 0:00:58.922 ******** 2026-01-02 01:08:44.094384 | orchestrator | 2026-01-02 01:08:44.094392 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-02 01:08:44.094405 | orchestrator | Friday 02 January 2026 01:07:08 +0000 (0:00:00.063) 0:00:58.986 ******** 2026-01-02 01:08:44.094413 | orchestrator | 2026-01-02 01:08:44.094421 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-02 01:08:44.094430 | orchestrator | Friday 02 January 2026 01:07:08 +0000 (0:00:00.065) 0:00:59.052 ******** 2026-01-02 01:08:44.094438 | orchestrator | 2026-01-02 01:08:44.094446 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-01-02 01:08:44.094454 | orchestrator | 
Friday 02 January 2026 01:07:08 +0000 (0:00:00.231) 0:00:59.283 ******** 2026-01-02 01:08:44.094462 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:08:44.094475 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:08:44.094484 | orchestrator | changed: [testbed-node-0] 2026-01-02 01:08:44.094492 | orchestrator | 2026-01-02 01:08:44.094500 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-01-02 01:08:44.094508 | orchestrator | Friday 02 January 2026 01:07:10 +0000 (0:00:01.759) 0:01:01.042 ******** 2026-01-02 01:08:44.094516 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:08:44.094524 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:08:44.094532 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-01-02 01:08:44.094550 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-01-02 01:08:44.094559 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-01-02 01:08:44.094568 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
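The `FAILED - RETRYING ... (N retries left)` lines above come from Ansible's `until`/`retries`/`delay` polling: the handler probes the Grafana endpoint, sleeps, and tries again until it responds (12 retries configured here; it succeeded on the fifth attempt). A minimal Python sketch of that wait-until-ready pattern; the `check` callable, retry count, and delay are illustrative, not taken from the playbook:

```python
import time

def wait_until_ready(check, retries=12, delay=5.0, sleep=time.sleep):
    """Poll check() until it returns True, mirroring Ansible's
    retries/until loop: probe, sleep `delay` seconds, retry.
    Returns the attempt number that succeeded, or raises TimeoutError."""
    for attempt in range(1, retries + 1):
        if check():
            return attempt
        if attempt < retries:
            sleep(delay)  # back off before the next probe
    raise TimeoutError(f"service not ready after {retries} attempts")

# Example: a probe that only succeeds on the fifth call, matching the
# four FAILED - RETRYING lines followed by "ok" in the log above.
if __name__ == "__main__":
    calls = {"n": 0}
    def fake_probe():
        calls["n"] += 1
        return calls["n"] >= 5
    print(wait_until_ready(fake_probe, delay=0, sleep=lambda s: None))
```

With a real service, `check` would be an HTTP probe of the Grafana login page, which is roughly what the kolla-ansible handler does via the `uri` module.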
2026-01-02 01:08:44.094576 | orchestrator | ok: [testbed-node-0]
2026-01-02 01:08:44.094584 | orchestrator | 
2026-01-02 01:08:44.094593 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-01-02 01:08:44.094601 | orchestrator | Friday 02 January 2026 01:08:01 +0000 (0:00:50.691) 0:01:51.734 ********
2026-01-02 01:08:44.094625 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:08:44.094636 | orchestrator | changed: [testbed-node-1]
2026-01-02 01:08:44.094648 | orchestrator | changed: [testbed-node-2]
2026-01-02 01:08:44.094657 | orchestrator | 
2026-01-02 01:08:44.094665 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-01-02 01:08:44.094683 | orchestrator | Friday 02 January 2026 01:08:37 +0000 (0:00:36.040) 0:02:27.775 ********
2026-01-02 01:08:44.094692 | orchestrator | ok: [testbed-node-0]
2026-01-02 01:08:44.094700 | orchestrator | 
2026-01-02 01:08:44.094708 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-01-02 01:08:44.094716 | orchestrator | Friday 02 January 2026 01:08:40 +0000 (0:00:02.527) 0:02:30.302 ********
2026-01-02 01:08:44.094724 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:08:44.094732 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:08:44.094740 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:08:44.094748 | orchestrator | 
2026-01-02 01:08:44.094756 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-01-02 01:08:44.094764 | orchestrator | Friday 02 January 2026 01:08:40 +0000 (0:00:00.409) 0:02:30.712 ********
2026-01-02 01:08:44.094773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-01-02 01:08:44.094785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-01-02 01:08:44.094793 | orchestrator | 
2026-01-02 01:08:44.094802 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-01-02 01:08:44.094810 | orchestrator | Friday 02 January 2026 01:08:42 +0000 (0:00:02.488) 0:02:33.200 ********
2026-01-02 01:08:44.094818 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:08:44.094826 | orchestrator | 
2026-01-02 01:08:44.094834 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 01:08:44.094843 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0  failed=0  skipped=7  rescued=0  ignored=0
2026-01-02 01:08:44.094852 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0  failed=0  skipped=7  rescued=0  ignored=0
2026-01-02 01:08:44.094867 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0  failed=0  skipped=7  rescued=0  ignored=0
2026-01-02 01:08:44.094875 | orchestrator | 
2026-01-02 01:08:44.094891 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 01:08:44.094899 | orchestrator | Friday 02 January 2026 01:08:43 +0000 (0:00:00.260) 0:02:33.461 ********
2026-01-02 01:08:44.094916 | orchestrator | ===============================================================================
2026-01-02 01:08:44.094925 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.69s
2026-01-02 01:08:44.094933 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.74s
2026-01-02 01:08:44.094941 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 36.04s
2026-01-02 01:08:44.094954 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.53s
2026-01-02 01:08:44.094962 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.49s
2026-01-02 01:08:44.094970 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.34s
2026-01-02 01:08:44.094978 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.29s
2026-01-02 01:08:44.094986 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.76s
2026-01-02 01:08:44.094994 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.38s
2026-01-02 01:08:44.095002 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.36s
2026-01-02 01:08:44.095010 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.33s
2026-01-02 01:08:44.095018 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.32s
2026-01-02 01:08:44.095026 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.25s
2026-01-02 01:08:44.095034 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.03s
2026-01-02 01:08:44.095042 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.87s
2026-01-02 01:08:44.095050 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.82s
2026-01-02 01:08:44.095058 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.77s
2026-01-02 01:08:44.095066 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.76s
2026-01-02 01:08:44.095074 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.76s
2026-01-02 01:08:44.095081 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.76s
2026-01-02 01:08:44.095090 | orchestrator | 2026-01-02 01:08:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 01:08:44.095102 | orchestrator | 2026-01-02 01:08:44 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:08:47.134314 | orchestrator | 2026-01-02 01:08:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 01:08:47.136691 | orchestrator | 2026-01-02 01:08:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 01:08:47.136889 | orchestrator | 2026-01-02 01:08:47 | INFO  | Wait 1 second(s) until the next check
[... repeated polling output omitted: tasks e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c and 922cb08d-5634-4147-8b36-6e252cfb52ba were reported in state STARTED on every check, polled roughly every 3 seconds from 01:08:50 through 01:13:09 ...]
2026-01-02 01:13:09.405243 | orchestrator | 2026-01-02 01:13:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 01:13:09.406329 | orchestrator | 2026-01-02 01:13:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 01:13:09.406378 | orchestrator | 2026-01-02 01:13:09 | INFO  | Wait 1 second(s)
until the next check 2026-01-02 01:13:12.458152 | orchestrator | 2026-01-02 01:13:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:13:12.459531 | orchestrator | 2026-01-02 01:13:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:13:12.459656 | orchestrator | 2026-01-02 01:13:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:15.506325 | orchestrator | 2026-01-02 01:13:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:13:15.509013 | orchestrator | 2026-01-02 01:13:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:13:15.509076 | orchestrator | 2026-01-02 01:13:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:18.555376 | orchestrator | 2026-01-02 01:13:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:13:18.557302 | orchestrator | 2026-01-02 01:13:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:13:18.557423 | orchestrator | 2026-01-02 01:13:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:21.605622 | orchestrator | 2026-01-02 01:13:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:13:21.607269 | orchestrator | 2026-01-02 01:13:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:13:21.607309 | orchestrator | 2026-01-02 01:13:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:24.655008 | orchestrator | 2026-01-02 01:13:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:13:24.656806 | orchestrator | 2026-01-02 01:13:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:13:24.656984 | orchestrator | 2026-01-02 01:13:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:27.700166 | orchestrator | 2026-01-02 
01:13:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:13:27.702676 | orchestrator | 2026-01-02 01:13:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:13:27.702867 | orchestrator | 2026-01-02 01:13:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:30.745186 | orchestrator | 2026-01-02 01:13:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:13:30.747047 | orchestrator | 2026-01-02 01:13:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:13:30.747088 | orchestrator | 2026-01-02 01:13:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:33.795416 | orchestrator | 2026-01-02 01:13:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:13:33.796532 | orchestrator | 2026-01-02 01:13:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:13:33.796562 | orchestrator | 2026-01-02 01:13:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:36.840125 | orchestrator | 2026-01-02 01:13:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:13:36.841217 | orchestrator | 2026-01-02 01:13:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:13:36.841246 | orchestrator | 2026-01-02 01:13:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:39.891106 | orchestrator | 2026-01-02 01:13:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:13:39.894591 | orchestrator | 2026-01-02 01:13:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:13:39.894682 | orchestrator | 2026-01-02 01:13:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:42.941928 | orchestrator | 2026-01-02 01:13:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 01:13:42.943930 | orchestrator | 2026-01-02 01:13:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:13:42.944425 | orchestrator | 2026-01-02 01:13:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:45.994636 | orchestrator | 2026-01-02 01:13:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:13:45.996599 | orchestrator | 2026-01-02 01:13:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:13:45.996845 | orchestrator | 2026-01-02 01:13:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:49.057099 | orchestrator | 2026-01-02 01:13:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:13:49.059819 | orchestrator | 2026-01-02 01:13:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:13:49.060125 | orchestrator | 2026-01-02 01:13:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:52.107804 | orchestrator | 2026-01-02 01:13:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:13:52.108606 | orchestrator | 2026-01-02 01:13:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:13:52.108639 | orchestrator | 2026-01-02 01:13:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:55.151154 | orchestrator | 2026-01-02 01:13:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:13:55.152113 | orchestrator | 2026-01-02 01:13:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:13:55.152170 | orchestrator | 2026-01-02 01:13:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:58.197978 | orchestrator | 2026-01-02 01:13:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:13:58.199666 | orchestrator | 2026-01-02 01:13:58 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:13:58.199699 | orchestrator | 2026-01-02 01:13:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:01.252881 | orchestrator | 2026-01-02 01:14:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:01.256351 | orchestrator | 2026-01-02 01:14:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:14:01.256387 | orchestrator | 2026-01-02 01:14:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:04.310305 | orchestrator | 2026-01-02 01:14:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:04.312014 | orchestrator | 2026-01-02 01:14:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:14:04.312084 | orchestrator | 2026-01-02 01:14:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:07.359587 | orchestrator | 2026-01-02 01:14:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:07.362334 | orchestrator | 2026-01-02 01:14:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:14:07.362425 | orchestrator | 2026-01-02 01:14:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:10.410530 | orchestrator | 2026-01-02 01:14:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:10.413046 | orchestrator | 2026-01-02 01:14:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:14:10.413096 | orchestrator | 2026-01-02 01:14:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:13.460181 | orchestrator | 2026-01-02 01:14:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:13.462426 | orchestrator | 2026-01-02 01:14:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
01:14:13.462584 | orchestrator | 2026-01-02 01:14:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:16.508315 | orchestrator | 2026-01-02 01:14:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:16.508867 | orchestrator | 2026-01-02 01:14:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:14:16.508906 | orchestrator | 2026-01-02 01:14:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:19.564772 | orchestrator | 2026-01-02 01:14:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:19.567678 | orchestrator | 2026-01-02 01:14:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:14:19.567720 | orchestrator | 2026-01-02 01:14:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:22.614322 | orchestrator | 2026-01-02 01:14:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:22.616125 | orchestrator | 2026-01-02 01:14:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:14:22.616309 | orchestrator | 2026-01-02 01:14:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:25.660881 | orchestrator | 2026-01-02 01:14:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:25.663250 | orchestrator | 2026-01-02 01:14:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:14:25.663365 | orchestrator | 2026-01-02 01:14:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:28.704041 | orchestrator | 2026-01-02 01:14:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:28.706249 | orchestrator | 2026-01-02 01:14:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:14:28.706330 | orchestrator | 2026-01-02 01:14:28 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 01:14:31.754728 | orchestrator | 2026-01-02 01:14:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:31.756925 | orchestrator | 2026-01-02 01:14:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:14:31.756983 | orchestrator | 2026-01-02 01:14:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:34.808560 | orchestrator | 2026-01-02 01:14:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:34.811797 | orchestrator | 2026-01-02 01:14:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:14:34.811935 | orchestrator | 2026-01-02 01:14:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:37.859679 | orchestrator | 2026-01-02 01:14:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:37.861559 | orchestrator | 2026-01-02 01:14:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:14:37.861614 | orchestrator | 2026-01-02 01:14:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:40.914556 | orchestrator | 2026-01-02 01:14:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:40.917982 | orchestrator | 2026-01-02 01:14:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:14:40.918069 | orchestrator | 2026-01-02 01:14:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:43.973980 | orchestrator | 2026-01-02 01:14:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:43.975553 | orchestrator | 2026-01-02 01:14:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:14:43.975568 | orchestrator | 2026-01-02 01:14:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:47.019044 | orchestrator | 2026-01-02 
01:14:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:47.020721 | orchestrator | 2026-01-02 01:14:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:14:47.020851 | orchestrator | 2026-01-02 01:14:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:50.068315 | orchestrator | 2026-01-02 01:14:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:50.069556 | orchestrator | 2026-01-02 01:14:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:14:50.069616 | orchestrator | 2026-01-02 01:14:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:53.108881 | orchestrator | 2026-01-02 01:14:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:53.110263 | orchestrator | 2026-01-02 01:14:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:14:53.110652 | orchestrator | 2026-01-02 01:14:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:56.156978 | orchestrator | 2026-01-02 01:14:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:56.158996 | orchestrator | 2026-01-02 01:14:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:14:56.159079 | orchestrator | 2026-01-02 01:14:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:59.209975 | orchestrator | 2026-01-02 01:14:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:14:59.212888 | orchestrator | 2026-01-02 01:14:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:14:59.212970 | orchestrator | 2026-01-02 01:14:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:15:02.286867 | orchestrator | 2026-01-02 01:15:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 01:15:02.288306 | orchestrator | 2026-01-02 01:15:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:15:02.288386 | orchestrator | 2026-01-02 01:15:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:15:05.335990 | orchestrator | 2026-01-02 01:15:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:15:05.336487 | orchestrator | 2026-01-02 01:15:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:15:05.336698 | orchestrator | 2026-01-02 01:15:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:15:08.390331 | orchestrator | 2026-01-02 01:15:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:15:08.392363 | orchestrator | 2026-01-02 01:15:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:15:08.392437 | orchestrator | 2026-01-02 01:15:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:15:11.446500 | orchestrator | 2026-01-02 01:15:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:15:11.449201 | orchestrator | 2026-01-02 01:15:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:15:11.449234 | orchestrator | 2026-01-02 01:15:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:15:14.497528 | orchestrator | 2026-01-02 01:15:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:15:14.500417 | orchestrator | 2026-01-02 01:15:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:15:14.500486 | orchestrator | 2026-01-02 01:15:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:15:17.549524 | orchestrator | 2026-01-02 01:15:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:15:17.550956 | orchestrator | 2026-01-02 01:15:17 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:15:17.551332 | orchestrator | 2026-01-02 01:15:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:15:20.598171 | orchestrator | 2026-01-02 01:15:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:15:20.599438 | orchestrator | 2026-01-02 01:15:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:15:20.599479 | orchestrator | 2026-01-02 01:15:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:15:23.649357 | orchestrator | 2026-01-02 01:15:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:15:23.650677 | orchestrator | 2026-01-02 01:15:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:15:23.650727 | orchestrator | 2026-01-02 01:15:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:15:26.695160 | orchestrator | 2026-01-02 01:15:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:15:26.697042 | orchestrator | 2026-01-02 01:15:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:15:26.697088 | orchestrator | 2026-01-02 01:15:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:15:29.746951 | orchestrator | 2026-01-02 01:15:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:15:29.749145 | orchestrator | 2026-01-02 01:15:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:15:29.749200 | orchestrator | 2026-01-02 01:15:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:15:32.799451 | orchestrator | 2026-01-02 01:15:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:15:32.801495 | orchestrator | 2026-01-02 01:15:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
01:15:32.802121 | orchestrator | 2026-01-02 01:15:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:15:35.854136 | orchestrator | 2026-01-02 01:15:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:15:35.855170 | orchestrator | 2026-01-02 01:15:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:15:35.855425 | orchestrator | 2026-01-02 01:15:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:15:38.908958 | orchestrator | 2026-01-02 01:15:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:15:38.909992 | orchestrator | 2026-01-02 01:15:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:15:38.910117 | orchestrator | 2026-01-02 01:15:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:15:41.958514 | orchestrator | 2026-01-02 01:15:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:15:41.959860 | orchestrator | 2026-01-02 01:15:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:15:41.959934 | orchestrator | 2026-01-02 01:15:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:15:45.013463 | orchestrator | 2026-01-02 01:15:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:15:45.014435 | orchestrator | 2026-01-02 01:15:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:15:45.014488 | orchestrator | 2026-01-02 01:15:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:15:48.060881 | orchestrator | 2026-01-02 01:15:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:15:48.061099 | orchestrator | 2026-01-02 01:15:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:15:48.061122 | orchestrator | 2026-01-02 01:15:48 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 01:15:51.111883 | orchestrator | 2026-01-02 01:15:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:15:51.113483 | orchestrator | 2026-01-02 01:15:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:15:51.113542 | orchestrator | 2026-01-02 01:15:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:15:54.162287 | orchestrator | 2026-01-02 01:15:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:15:54.164899 | orchestrator | 2026-01-02 01:15:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:15:54.164925 | orchestrator | 2026-01-02 01:15:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:15:57.220606 | orchestrator | 2026-01-02 01:15:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:15:57.222528 | orchestrator | 2026-01-02 01:15:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:15:57.222675 | orchestrator | 2026-01-02 01:15:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:16:00.271427 | orchestrator | 2026-01-02 01:16:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:16:00.272112 | orchestrator | 2026-01-02 01:16:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:16:00.272172 | orchestrator | 2026-01-02 01:16:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:16:03.316427 | orchestrator | 2026-01-02 01:16:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:16:03.317532 | orchestrator | 2026-01-02 01:16:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:16:03.317993 | orchestrator | 2026-01-02 01:16:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:16:06.366531 | orchestrator | 2026-01-02 
01:16:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:16:06.368484 | orchestrator | 2026-01-02 01:16:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:16:06.368787 | orchestrator | 2026-01-02 01:16:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:16:09.416123 | orchestrator | 2026-01-02 01:16:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:16:09.417734 | orchestrator | 2026-01-02 01:16:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:16:09.417784 | orchestrator | 2026-01-02 01:16:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:16:12.466721 | orchestrator | 2026-01-02 01:16:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:16:12.467978 | orchestrator | 2026-01-02 01:16:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:16:12.468024 | orchestrator | 2026-01-02 01:16:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:16:15.516526 | orchestrator | 2026-01-02 01:16:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:16:15.519011 | orchestrator | 2026-01-02 01:16:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:16:15.519126 | orchestrator | 2026-01-02 01:16:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:16:18.563109 | orchestrator | 2026-01-02 01:16:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:16:18.564750 | orchestrator | 2026-01-02 01:16:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:16:18.564908 | orchestrator | 2026-01-02 01:16:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:16:21.609751 | orchestrator | 2026-01-02 01:16:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 01:16:21.611774 | orchestrator | 2026-01-02 01:16:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:16:21.611860 | orchestrator | 2026-01-02 01:16:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:16:24.662170 | orchestrator | 2026-01-02 01:16:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:16:24.662928 | orchestrator | 2026-01-02 01:16:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:16:24.662981 | orchestrator | 2026-01-02 01:16:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:16:27.711688 | orchestrator | 2026-01-02 01:16:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:16:27.713223 | orchestrator | 2026-01-02 01:16:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:16:27.713288 | orchestrator | 2026-01-02 01:16:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:16:30.759399 | orchestrator | 2026-01-02 01:16:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:16:30.760759 | orchestrator | 2026-01-02 01:16:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:16:30.760913 | orchestrator | 2026-01-02 01:16:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:16:33.807983 | orchestrator | 2026-01-02 01:16:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:16:33.809212 | orchestrator | 2026-01-02 01:16:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:16:33.809246 | orchestrator | 2026-01-02 01:16:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:16:36.856757 | orchestrator | 2026-01-02 01:16:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:16:36.857994 | orchestrator | 2026-01-02 01:16:36 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:16:36.858089 | orchestrator | 2026-01-02 01:16:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:16:39.909730 | orchestrator | 2026-01-02 01:16:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:16:39.913069 | orchestrator | 2026-01-02 01:16:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:16:39.913119 | orchestrator | 2026-01-02 01:16:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:16:42.963456 | orchestrator | 2026-01-02 01:16:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:16:42.966089 | orchestrator | 2026-01-02 01:16:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:16:42.966154 | orchestrator | 2026-01-02 01:16:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:16:46.021989 | orchestrator | 2026-01-02 01:16:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:16:46.023610 | orchestrator | 2026-01-02 01:16:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:16:46.023665 | orchestrator | 2026-01-02 01:16:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:16:49.076959 | orchestrator | 2026-01-02 01:16:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:16:49.080193 | orchestrator | 2026-01-02 01:16:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:16:49.080260 | orchestrator | 2026-01-02 01:16:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:16:52.128464 | orchestrator | 2026-01-02 01:16:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:16:52.129706 | orchestrator | 2026-01-02 01:16:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
01:16:52.129750 | orchestrator | 2026-01-02 01:16:52 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:16:55.190013 | orchestrator | 2026-01-02 01:16:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 01:16:55.190263 | orchestrator | 2026-01-02 01:16:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 01:16:55.190598 | orchestrator | 2026-01-02 01:16:55 | INFO  | Wait 1 second(s) until the next check
[... the same three entries repeat every ~3 seconds from 01:16:58 through 01:21:51; both tasks remain in state STARTED throughout ...]
2026-01-02 01:21:54.207364 | orchestrator | 2026-01-02 01:21:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 01:21:54.209052 | orchestrator | 2026-01-02 01:21:54 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:21:54.209521 | orchestrator | 2026-01-02 01:21:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:21:57.257971 | orchestrator | 2026-01-02 01:21:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:21:57.259144 | orchestrator | 2026-01-02 01:21:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:21:57.259186 | orchestrator | 2026-01-02 01:21:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:00.308434 | orchestrator | 2026-01-02 01:22:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:22:00.309252 | orchestrator | 2026-01-02 01:22:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:22:00.309352 | orchestrator | 2026-01-02 01:22:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:03.360828 | orchestrator | 2026-01-02 01:22:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:22:03.362362 | orchestrator | 2026-01-02 01:22:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:22:03.362549 | orchestrator | 2026-01-02 01:22:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:06.412355 | orchestrator | 2026-01-02 01:22:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:22:06.413755 | orchestrator | 2026-01-02 01:22:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:22:06.413958 | orchestrator | 2026-01-02 01:22:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:09.461863 | orchestrator | 2026-01-02 01:22:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:22:09.464875 | orchestrator | 2026-01-02 01:22:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
01:22:09.464919 | orchestrator | 2026-01-02 01:22:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:12.507306 | orchestrator | 2026-01-02 01:22:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:22:12.509482 | orchestrator | 2026-01-02 01:22:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:22:12.509533 | orchestrator | 2026-01-02 01:22:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:15.557244 | orchestrator | 2026-01-02 01:22:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:22:15.558270 | orchestrator | 2026-01-02 01:22:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:22:15.558364 | orchestrator | 2026-01-02 01:22:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:18.598297 | orchestrator | 2026-01-02 01:22:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:22:18.598891 | orchestrator | 2026-01-02 01:22:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:22:18.598983 | orchestrator | 2026-01-02 01:22:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:21.643889 | orchestrator | 2026-01-02 01:22:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:22:21.647578 | orchestrator | 2026-01-02 01:22:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:22:21.647743 | orchestrator | 2026-01-02 01:22:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:24.686867 | orchestrator | 2026-01-02 01:22:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:22:24.687825 | orchestrator | 2026-01-02 01:22:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:22:24.687902 | orchestrator | 2026-01-02 01:22:24 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 01:22:27.737276 | orchestrator | 2026-01-02 01:22:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:22:27.739675 | orchestrator | 2026-01-02 01:22:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:22:27.739909 | orchestrator | 2026-01-02 01:22:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:30.786201 | orchestrator | 2026-01-02 01:22:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:22:30.788754 | orchestrator | 2026-01-02 01:22:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:22:30.788842 | orchestrator | 2026-01-02 01:22:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:33.833089 | orchestrator | 2026-01-02 01:22:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:22:33.834759 | orchestrator | 2026-01-02 01:22:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:22:33.834796 | orchestrator | 2026-01-02 01:22:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:36.883955 | orchestrator | 2026-01-02 01:22:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:22:36.885653 | orchestrator | 2026-01-02 01:22:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:22:36.885728 | orchestrator | 2026-01-02 01:22:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:39.936141 | orchestrator | 2026-01-02 01:22:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:22:39.938434 | orchestrator | 2026-01-02 01:22:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:22:39.938495 | orchestrator | 2026-01-02 01:22:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:42.987870 | orchestrator | 2026-01-02 
01:22:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:22:42.988655 | orchestrator | 2026-01-02 01:22:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:22:42.988790 | orchestrator | 2026-01-02 01:22:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:46.043378 | orchestrator | 2026-01-02 01:22:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:22:46.045050 | orchestrator | 2026-01-02 01:22:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:22:46.045104 | orchestrator | 2026-01-02 01:22:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:49.094625 | orchestrator | 2026-01-02 01:22:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:22:49.096804 | orchestrator | 2026-01-02 01:22:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:22:49.096880 | orchestrator | 2026-01-02 01:22:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:52.144358 | orchestrator | 2026-01-02 01:22:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:22:52.145101 | orchestrator | 2026-01-02 01:22:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:22:52.145142 | orchestrator | 2026-01-02 01:22:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:55.192610 | orchestrator | 2026-01-02 01:22:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:22:55.194898 | orchestrator | 2026-01-02 01:22:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:22:55.195270 | orchestrator | 2026-01-02 01:22:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:58.234660 | orchestrator | 2026-01-02 01:22:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 01:22:58.236119 | orchestrator | 2026-01-02 01:22:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:22:58.236159 | orchestrator | 2026-01-02 01:22:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:01.285054 | orchestrator | 2026-01-02 01:23:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:01.289631 | orchestrator | 2026-01-02 01:23:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:23:01.289916 | orchestrator | 2026-01-02 01:23:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:04.338641 | orchestrator | 2026-01-02 01:23:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:04.340906 | orchestrator | 2026-01-02 01:23:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:23:04.340950 | orchestrator | 2026-01-02 01:23:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:07.385050 | orchestrator | 2026-01-02 01:23:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:07.387624 | orchestrator | 2026-01-02 01:23:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:23:07.387775 | orchestrator | 2026-01-02 01:23:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:10.438492 | orchestrator | 2026-01-02 01:23:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:10.441075 | orchestrator | 2026-01-02 01:23:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:23:10.441201 | orchestrator | 2026-01-02 01:23:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:13.493471 | orchestrator | 2026-01-02 01:23:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:13.495286 | orchestrator | 2026-01-02 01:23:13 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:23:13.495452 | orchestrator | 2026-01-02 01:23:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:16.543240 | orchestrator | 2026-01-02 01:23:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:16.544453 | orchestrator | 2026-01-02 01:23:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:23:16.544494 | orchestrator | 2026-01-02 01:23:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:19.599814 | orchestrator | 2026-01-02 01:23:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:19.601972 | orchestrator | 2026-01-02 01:23:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:23:19.602188 | orchestrator | 2026-01-02 01:23:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:22.648326 | orchestrator | 2026-01-02 01:23:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:22.649633 | orchestrator | 2026-01-02 01:23:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:23:22.649780 | orchestrator | 2026-01-02 01:23:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:25.699924 | orchestrator | 2026-01-02 01:23:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:25.702278 | orchestrator | 2026-01-02 01:23:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:23:25.702316 | orchestrator | 2026-01-02 01:23:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:28.749820 | orchestrator | 2026-01-02 01:23:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:28.751490 | orchestrator | 2026-01-02 01:23:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
01:23:28.751814 | orchestrator | 2026-01-02 01:23:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:31.802624 | orchestrator | 2026-01-02 01:23:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:31.803518 | orchestrator | 2026-01-02 01:23:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:23:31.803577 | orchestrator | 2026-01-02 01:23:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:34.856119 | orchestrator | 2026-01-02 01:23:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:34.858008 | orchestrator | 2026-01-02 01:23:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:23:34.858090 | orchestrator | 2026-01-02 01:23:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:37.907782 | orchestrator | 2026-01-02 01:23:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:37.909124 | orchestrator | 2026-01-02 01:23:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:23:37.909195 | orchestrator | 2026-01-02 01:23:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:40.956186 | orchestrator | 2026-01-02 01:23:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:40.956269 | orchestrator | 2026-01-02 01:23:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:23:40.956279 | orchestrator | 2026-01-02 01:23:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:44.006750 | orchestrator | 2026-01-02 01:23:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:44.007183 | orchestrator | 2026-01-02 01:23:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:23:44.007205 | orchestrator | 2026-01-02 01:23:44 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 01:23:47.057100 | orchestrator | 2026-01-02 01:23:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:47.058190 | orchestrator | 2026-01-02 01:23:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:23:47.058261 | orchestrator | 2026-01-02 01:23:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:50.104204 | orchestrator | 2026-01-02 01:23:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:50.106382 | orchestrator | 2026-01-02 01:23:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:23:50.106451 | orchestrator | 2026-01-02 01:23:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:53.150380 | orchestrator | 2026-01-02 01:23:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:53.151140 | orchestrator | 2026-01-02 01:23:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:23:53.151169 | orchestrator | 2026-01-02 01:23:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:56.199353 | orchestrator | 2026-01-02 01:23:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:56.201512 | orchestrator | 2026-01-02 01:23:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:23:56.201629 | orchestrator | 2026-01-02 01:23:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:59.251834 | orchestrator | 2026-01-02 01:23:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:23:59.252791 | orchestrator | 2026-01-02 01:23:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:23:59.252921 | orchestrator | 2026-01-02 01:23:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:24:02.306676 | orchestrator | 2026-01-02 
01:24:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:24:02.308292 | orchestrator | 2026-01-02 01:24:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:24:02.308423 | orchestrator | 2026-01-02 01:24:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:24:05.355903 | orchestrator | 2026-01-02 01:24:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:24:05.358323 | orchestrator | 2026-01-02 01:24:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:24:05.358392 | orchestrator | 2026-01-02 01:24:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:24:08.410529 | orchestrator | 2026-01-02 01:24:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:24:08.411899 | orchestrator | 2026-01-02 01:24:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:24:08.411976 | orchestrator | 2026-01-02 01:24:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:24:11.460411 | orchestrator | 2026-01-02 01:24:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:24:11.463020 | orchestrator | 2026-01-02 01:24:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:24:11.463073 | orchestrator | 2026-01-02 01:24:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:24:14.518518 | orchestrator | 2026-01-02 01:24:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:24:14.519656 | orchestrator | 2026-01-02 01:24:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:24:14.519676 | orchestrator | 2026-01-02 01:24:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:24:17.562325 | orchestrator | 2026-01-02 01:24:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 01:24:17.563571 | orchestrator | 2026-01-02 01:24:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:24:17.563724 | orchestrator | 2026-01-02 01:24:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:24:20.612532 | orchestrator | 2026-01-02 01:24:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:24:20.614414 | orchestrator | 2026-01-02 01:24:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:24:20.614575 | orchestrator | 2026-01-02 01:24:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:24:23.660790 | orchestrator | 2026-01-02 01:24:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:24:23.661974 | orchestrator | 2026-01-02 01:24:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:24:23.662200 | orchestrator | 2026-01-02 01:24:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:24:26.717092 | orchestrator | 2026-01-02 01:24:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:24:26.719593 | orchestrator | 2026-01-02 01:24:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:24:26.719709 | orchestrator | 2026-01-02 01:24:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:24:29.761777 | orchestrator | 2026-01-02 01:24:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:24:29.764456 | orchestrator | 2026-01-02 01:24:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:24:29.764558 | orchestrator | 2026-01-02 01:24:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:24:32.812864 | orchestrator | 2026-01-02 01:24:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:24:32.814408 | orchestrator | 2026-01-02 01:24:32 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:24:32.814435 | orchestrator | 2026-01-02 01:24:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:24:35.866308 | orchestrator | 2026-01-02 01:24:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:24:35.868617 | orchestrator | 2026-01-02 01:24:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:24:35.869024 | orchestrator | 2026-01-02 01:24:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:24:38.919286 | orchestrator | 2026-01-02 01:24:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:24:38.920167 | orchestrator | 2026-01-02 01:24:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:24:38.920543 | orchestrator | 2026-01-02 01:24:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:24:41.966091 | orchestrator | 2026-01-02 01:24:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:24:41.967008 | orchestrator | 2026-01-02 01:24:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:24:41.967107 | orchestrator | 2026-01-02 01:24:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:24:45.016147 | orchestrator | 2026-01-02 01:24:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:24:45.018667 | orchestrator | 2026-01-02 01:24:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:24:45.018811 | orchestrator | 2026-01-02 01:24:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:24:48.067275 | orchestrator | 2026-01-02 01:24:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:24:48.069085 | orchestrator | 2026-01-02 01:24:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
01:24:48.069143 | orchestrator | 2026-01-02 01:24:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:24:51.116096 | orchestrator | 2026-01-02 01:24:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:24:51.118581 | orchestrator | 2026-01-02 01:24:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:24:51.118871 | orchestrator | 2026-01-02 01:24:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:24:54.168451 | orchestrator | 2026-01-02 01:24:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:24:54.168750 | orchestrator | 2026-01-02 01:24:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:24:54.168790 | orchestrator | 2026-01-02 01:24:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:24:57.213380 | orchestrator | 2026-01-02 01:24:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:24:57.214750 | orchestrator | 2026-01-02 01:24:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:24:57.215110 | orchestrator | 2026-01-02 01:24:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:25:00.274212 | orchestrator | 2026-01-02 01:25:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:25:00.275096 | orchestrator | 2026-01-02 01:25:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:25:00.275478 | orchestrator | 2026-01-02 01:25:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:25:03.332114 | orchestrator | 2026-01-02 01:25:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:25:03.333374 | orchestrator | 2026-01-02 01:25:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:25:03.333397 | orchestrator | 2026-01-02 01:25:03 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 01:25:06.379636 | orchestrator | 2026-01-02 01:25:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:25:06.381855 | orchestrator | 2026-01-02 01:25:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:25:06.381924 | orchestrator | 2026-01-02 01:25:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:25:09.448476 | orchestrator | 2026-01-02 01:25:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:25:09.449374 | orchestrator | 2026-01-02 01:25:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:25:09.449412 | orchestrator | 2026-01-02 01:25:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:25:12.495283 | orchestrator | 2026-01-02 01:25:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:25:12.499038 | orchestrator | 2026-01-02 01:25:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:25:12.499145 | orchestrator | 2026-01-02 01:25:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:25:15.552848 | orchestrator | 2026-01-02 01:25:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:25:15.553812 | orchestrator | 2026-01-02 01:25:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:25:15.553849 | orchestrator | 2026-01-02 01:25:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:25:18.603188 | orchestrator | 2026-01-02 01:25:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:25:18.604858 | orchestrator | 2026-01-02 01:25:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:25:18.605044 | orchestrator | 2026-01-02 01:25:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:25:21.658339 | orchestrator | 2026-01-02 
01:25:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:25:21.659544 | orchestrator | 2026-01-02 01:25:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:25:21.659584 | orchestrator | 2026-01-02 01:25:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:25:24.719306 | orchestrator | 2026-01-02 01:25:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:25:24.720791 | orchestrator | 2026-01-02 01:25:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:25:24.720849 | orchestrator | 2026-01-02 01:25:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:25:27.774193 | orchestrator | 2026-01-02 01:25:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:25:27.775721 | orchestrator | 2026-01-02 01:25:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:25:27.775776 | orchestrator | 2026-01-02 01:25:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:25:30.825351 | orchestrator | 2026-01-02 01:25:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:25:30.826116 | orchestrator | 2026-01-02 01:25:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:25:30.826144 | orchestrator | 2026-01-02 01:25:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:25:33.872964 | orchestrator | 2026-01-02 01:25:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:25:33.875932 | orchestrator | 2026-01-02 01:25:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:25:33.876012 | orchestrator | 2026-01-02 01:25:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:25:36.925463 | orchestrator | 2026-01-02 01:25:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED
2026-01-02 01:25:36.928413 | orchestrator | 2026-01-02 01:25:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 01:25:36.928531 | orchestrator | 2026-01-02 01:25:36 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:25:39.982244 | orchestrator | 2026-01-02 01:25:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 01:25:39.983528 | orchestrator | 2026-01-02 01:25:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 01:25:39.983629 | orchestrator | 2026-01-02 01:25:39 | INFO  | Wait 1 second(s) until the next check
[... identical check/wait cycle repeated every ~3 seconds: tasks e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c and 922cb08d-5634-4147-8b36-6e252cfb52ba remained in state STARTED from 01:25:36 through 01:31:09 ...]
2026-01-02 01:31:09.470349 | orchestrator | 2026-01-02 01:31:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 01:31:09.472097 | orchestrator | 2026-01-02 01:31:09 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:31:09.472117 | orchestrator | 2026-01-02 01:31:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:12.513225 | orchestrator | 2026-01-02 01:31:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:31:12.514574 | orchestrator | 2026-01-02 01:31:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:31:12.514607 | orchestrator | 2026-01-02 01:31:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:15.566917 | orchestrator | 2026-01-02 01:31:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:31:15.567468 | orchestrator | 2026-01-02 01:31:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:31:15.567506 | orchestrator | 2026-01-02 01:31:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:18.619623 | orchestrator | 2026-01-02 01:31:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:31:18.621304 | orchestrator | 2026-01-02 01:31:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:31:18.621359 | orchestrator | 2026-01-02 01:31:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:21.671589 | orchestrator | 2026-01-02 01:31:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:31:21.674890 | orchestrator | 2026-01-02 01:31:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:31:21.674982 | orchestrator | 2026-01-02 01:31:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:24.724967 | orchestrator | 2026-01-02 01:31:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:31:24.725458 | orchestrator | 2026-01-02 01:31:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
01:31:24.725492 | orchestrator | 2026-01-02 01:31:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:27.769512 | orchestrator | 2026-01-02 01:31:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:31:27.771085 | orchestrator | 2026-01-02 01:31:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:31:27.771125 | orchestrator | 2026-01-02 01:31:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:30.817915 | orchestrator | 2026-01-02 01:31:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:31:30.819349 | orchestrator | 2026-01-02 01:31:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:31:30.819710 | orchestrator | 2026-01-02 01:31:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:33.868560 | orchestrator | 2026-01-02 01:31:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:31:33.870095 | orchestrator | 2026-01-02 01:31:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:31:33.870157 | orchestrator | 2026-01-02 01:31:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:36.926589 | orchestrator | 2026-01-02 01:31:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:31:36.928716 | orchestrator | 2026-01-02 01:31:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:31:36.928844 | orchestrator | 2026-01-02 01:31:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:39.977937 | orchestrator | 2026-01-02 01:31:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:31:39.980477 | orchestrator | 2026-01-02 01:31:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:31:39.980667 | orchestrator | 2026-01-02 01:31:39 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 01:31:43.047795 | orchestrator | 2026-01-02 01:31:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:31:43.049771 | orchestrator | 2026-01-02 01:31:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:31:43.050039 | orchestrator | 2026-01-02 01:31:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:46.099258 | orchestrator | 2026-01-02 01:31:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:31:46.101393 | orchestrator | 2026-01-02 01:31:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:31:46.101731 | orchestrator | 2026-01-02 01:31:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:49.152188 | orchestrator | 2026-01-02 01:31:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:31:49.155765 | orchestrator | 2026-01-02 01:31:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:31:49.155871 | orchestrator | 2026-01-02 01:31:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:52.208157 | orchestrator | 2026-01-02 01:31:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:31:52.210195 | orchestrator | 2026-01-02 01:31:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:31:52.210243 | orchestrator | 2026-01-02 01:31:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:55.262099 | orchestrator | 2026-01-02 01:31:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:31:55.263294 | orchestrator | 2026-01-02 01:31:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:31:55.263366 | orchestrator | 2026-01-02 01:31:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:58.310265 | orchestrator | 2026-01-02 
01:31:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:31:58.311300 | orchestrator | 2026-01-02 01:31:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:31:58.311346 | orchestrator | 2026-01-02 01:31:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:01.352776 | orchestrator | 2026-01-02 01:32:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:32:01.354213 | orchestrator | 2026-01-02 01:32:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:32:01.354449 | orchestrator | 2026-01-02 01:32:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:04.406400 | orchestrator | 2026-01-02 01:32:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:32:04.408231 | orchestrator | 2026-01-02 01:32:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:32:04.408286 | orchestrator | 2026-01-02 01:32:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:07.460572 | orchestrator | 2026-01-02 01:32:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:32:07.462110 | orchestrator | 2026-01-02 01:32:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:32:07.462171 | orchestrator | 2026-01-02 01:32:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:10.515313 | orchestrator | 2026-01-02 01:32:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:32:10.516320 | orchestrator | 2026-01-02 01:32:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:32:10.516392 | orchestrator | 2026-01-02 01:32:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:13.557902 | orchestrator | 2026-01-02 01:32:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 01:32:13.559948 | orchestrator | 2026-01-02 01:32:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:32:13.560080 | orchestrator | 2026-01-02 01:32:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:16.601744 | orchestrator | 2026-01-02 01:32:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:32:16.602091 | orchestrator | 2026-01-02 01:32:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:32:16.602129 | orchestrator | 2026-01-02 01:32:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:19.654218 | orchestrator | 2026-01-02 01:32:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:32:19.657552 | orchestrator | 2026-01-02 01:32:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:32:19.657688 | orchestrator | 2026-01-02 01:32:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:22.704188 | orchestrator | 2026-01-02 01:32:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:32:22.706626 | orchestrator | 2026-01-02 01:32:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:32:22.706766 | orchestrator | 2026-01-02 01:32:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:25.758836 | orchestrator | 2026-01-02 01:32:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:32:25.760600 | orchestrator | 2026-01-02 01:32:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:32:25.760689 | orchestrator | 2026-01-02 01:32:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:28.807422 | orchestrator | 2026-01-02 01:32:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:32:28.810520 | orchestrator | 2026-01-02 01:32:28 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:32:28.810572 | orchestrator | 2026-01-02 01:32:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:31.854325 | orchestrator | 2026-01-02 01:32:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:32:31.855910 | orchestrator | 2026-01-02 01:32:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:32:31.855959 | orchestrator | 2026-01-02 01:32:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:34.902780 | orchestrator | 2026-01-02 01:32:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:32:34.905588 | orchestrator | 2026-01-02 01:32:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:32:34.905806 | orchestrator | 2026-01-02 01:32:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:37.954556 | orchestrator | 2026-01-02 01:32:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:32:37.956715 | orchestrator | 2026-01-02 01:32:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:32:37.956761 | orchestrator | 2026-01-02 01:32:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:41.011684 | orchestrator | 2026-01-02 01:32:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:32:41.014087 | orchestrator | 2026-01-02 01:32:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:32:41.014157 | orchestrator | 2026-01-02 01:32:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:44.070551 | orchestrator | 2026-01-02 01:32:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:32:44.072119 | orchestrator | 2026-01-02 01:32:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
01:32:44.072156 | orchestrator | 2026-01-02 01:32:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:47.118483 | orchestrator | 2026-01-02 01:32:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:32:47.120523 | orchestrator | 2026-01-02 01:32:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:32:47.120566 | orchestrator | 2026-01-02 01:32:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:50.169997 | orchestrator | 2026-01-02 01:32:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:32:50.172212 | orchestrator | 2026-01-02 01:32:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:32:50.172266 | orchestrator | 2026-01-02 01:32:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:53.215811 | orchestrator | 2026-01-02 01:32:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:32:53.217080 | orchestrator | 2026-01-02 01:32:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:32:53.217116 | orchestrator | 2026-01-02 01:32:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:56.261022 | orchestrator | 2026-01-02 01:32:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:32:56.262773 | orchestrator | 2026-01-02 01:32:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:32:56.262805 | orchestrator | 2026-01-02 01:32:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:59.306109 | orchestrator | 2026-01-02 01:32:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:32:59.307012 | orchestrator | 2026-01-02 01:32:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:32:59.307094 | orchestrator | 2026-01-02 01:32:59 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 01:33:02.358286 | orchestrator | 2026-01-02 01:33:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:33:02.360470 | orchestrator | 2026-01-02 01:33:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:33:02.360580 | orchestrator | 2026-01-02 01:33:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:05.412264 | orchestrator | 2026-01-02 01:33:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:33:05.413251 | orchestrator | 2026-01-02 01:33:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:33:05.413382 | orchestrator | 2026-01-02 01:33:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:08.463852 | orchestrator | 2026-01-02 01:33:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:33:08.466250 | orchestrator | 2026-01-02 01:33:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:33:08.466302 | orchestrator | 2026-01-02 01:33:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:11.518757 | orchestrator | 2026-01-02 01:33:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:33:11.520856 | orchestrator | 2026-01-02 01:33:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:33:11.520959 | orchestrator | 2026-01-02 01:33:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:14.570308 | orchestrator | 2026-01-02 01:33:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:33:14.572721 | orchestrator | 2026-01-02 01:33:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:33:14.573485 | orchestrator | 2026-01-02 01:33:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:17.619008 | orchestrator | 2026-01-02 
01:33:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:33:17.622080 | orchestrator | 2026-01-02 01:33:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:33:17.622241 | orchestrator | 2026-01-02 01:33:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:20.668161 | orchestrator | 2026-01-02 01:33:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:33:20.669953 | orchestrator | 2026-01-02 01:33:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:33:20.670008 | orchestrator | 2026-01-02 01:33:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:23.717264 | orchestrator | 2026-01-02 01:33:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:33:23.719367 | orchestrator | 2026-01-02 01:33:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:33:23.719504 | orchestrator | 2026-01-02 01:33:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:26.762286 | orchestrator | 2026-01-02 01:33:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:33:26.765008 | orchestrator | 2026-01-02 01:33:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:33:26.765057 | orchestrator | 2026-01-02 01:33:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:29.813996 | orchestrator | 2026-01-02 01:33:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:33:29.814392 | orchestrator | 2026-01-02 01:33:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:33:29.814434 | orchestrator | 2026-01-02 01:33:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:32.868138 | orchestrator | 2026-01-02 01:33:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 01:33:32.874388 | orchestrator | 2026-01-02 01:33:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:33:32.874463 | orchestrator | 2026-01-02 01:33:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:35.927275 | orchestrator | 2026-01-02 01:33:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:33:35.929266 | orchestrator | 2026-01-02 01:33:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:33:35.929345 | orchestrator | 2026-01-02 01:33:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:38.982279 | orchestrator | 2026-01-02 01:33:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:33:38.983203 | orchestrator | 2026-01-02 01:33:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:33:38.983229 | orchestrator | 2026-01-02 01:33:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:42.039188 | orchestrator | 2026-01-02 01:33:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:33:42.041512 | orchestrator | 2026-01-02 01:33:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:33:42.041561 | orchestrator | 2026-01-02 01:33:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:45.099755 | orchestrator | 2026-01-02 01:33:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:33:45.102077 | orchestrator | 2026-01-02 01:33:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:33:45.102156 | orchestrator | 2026-01-02 01:33:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:48.151415 | orchestrator | 2026-01-02 01:33:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:33:48.153840 | orchestrator | 2026-01-02 01:33:48 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:33:48.153893 | orchestrator | 2026-01-02 01:33:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:51.204144 | orchestrator | 2026-01-02 01:33:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:33:51.208617 | orchestrator | 2026-01-02 01:33:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:33:51.208821 | orchestrator | 2026-01-02 01:33:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:54.263896 | orchestrator | 2026-01-02 01:33:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:33:54.266116 | orchestrator | 2026-01-02 01:33:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:33:54.266617 | orchestrator | 2026-01-02 01:33:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:57.314838 | orchestrator | 2026-01-02 01:33:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:33:57.319269 | orchestrator | 2026-01-02 01:33:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:33:57.319379 | orchestrator | 2026-01-02 01:33:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:00.360314 | orchestrator | 2026-01-02 01:34:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:34:00.362433 | orchestrator | 2026-01-02 01:34:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:34:00.362484 | orchestrator | 2026-01-02 01:34:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:03.409526 | orchestrator | 2026-01-02 01:34:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:34:03.410600 | orchestrator | 2026-01-02 01:34:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
01:34:03.410619 | orchestrator | 2026-01-02 01:34:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:06.452427 | orchestrator | 2026-01-02 01:34:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:34:06.454155 | orchestrator | 2026-01-02 01:34:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:34:06.454287 | orchestrator | 2026-01-02 01:34:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:09.507875 | orchestrator | 2026-01-02 01:34:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:34:09.509515 | orchestrator | 2026-01-02 01:34:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:34:09.509557 | orchestrator | 2026-01-02 01:34:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:12.553170 | orchestrator | 2026-01-02 01:34:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:34:12.555045 | orchestrator | 2026-01-02 01:34:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:34:12.555121 | orchestrator | 2026-01-02 01:34:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:15.606967 | orchestrator | 2026-01-02 01:34:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:34:15.608910 | orchestrator | 2026-01-02 01:34:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:34:15.608959 | orchestrator | 2026-01-02 01:34:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:18.660690 | orchestrator | 2026-01-02 01:34:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:34:18.661790 | orchestrator | 2026-01-02 01:34:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:34:18.661823 | orchestrator | 2026-01-02 01:34:18 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 01:34:21.710422 | orchestrator | 2026-01-02 01:34:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:34:21.710606 | orchestrator | 2026-01-02 01:34:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:34:21.710674 | orchestrator | 2026-01-02 01:34:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:24.761840 | orchestrator | 2026-01-02 01:34:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:34:24.763043 | orchestrator | 2026-01-02 01:34:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:34:24.763258 | orchestrator | 2026-01-02 01:34:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:27.813515 | orchestrator | 2026-01-02 01:34:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:34:27.816261 | orchestrator | 2026-01-02 01:34:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:34:27.816358 | orchestrator | 2026-01-02 01:34:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:30.859685 | orchestrator | 2026-01-02 01:34:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:34:30.861231 | orchestrator | 2026-01-02 01:34:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:34:30.861423 | orchestrator | 2026-01-02 01:34:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:33.902791 | orchestrator | 2026-01-02 01:34:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:34:33.906309 | orchestrator | 2026-01-02 01:34:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:34:33.906426 | orchestrator | 2026-01-02 01:34:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:36.946472 | orchestrator | 2026-01-02 
01:34:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:34:36.946755 | orchestrator | 2026-01-02 01:34:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:34:36.946782 | orchestrator | 2026-01-02 01:34:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:39.996411 | orchestrator | 2026-01-02 01:34:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:34:39.998192 | orchestrator | 2026-01-02 01:34:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:34:39.998247 | orchestrator | 2026-01-02 01:34:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:43.037431 | orchestrator | 2026-01-02 01:34:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:34:43.038339 | orchestrator | 2026-01-02 01:34:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:34:43.038405 | orchestrator | 2026-01-02 01:34:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:46.091438 | orchestrator | 2026-01-02 01:34:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:34:46.093745 | orchestrator | 2026-01-02 01:34:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:34:46.093805 | orchestrator | 2026-01-02 01:34:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:49.137082 | orchestrator | 2026-01-02 01:34:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:34:49.139395 | orchestrator | 2026-01-02 01:34:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:34:49.139468 | orchestrator | 2026-01-02 01:34:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:52.185699 | orchestrator | 2026-01-02 01:34:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 01:34:52.187368 | orchestrator | 2026-01-02 01:34:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:34:52.187428 | orchestrator | 2026-01-02 01:34:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:40:09.433693 | orchestrator | 2026-01-02 01:40:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state
STARTED 2026-01-02 01:40:09.435780 | orchestrator | 2026-01-02 01:40:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:40:09.435848 | orchestrator | 2026-01-02 01:40:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:40:12.477079 | orchestrator | 2026-01-02 01:40:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:40:12.479269 | orchestrator | 2026-01-02 01:40:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:40:12.479321 | orchestrator | 2026-01-02 01:40:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:40:15.518339 | orchestrator | 2026-01-02 01:40:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:40:15.519819 | orchestrator | 2026-01-02 01:40:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:40:15.519853 | orchestrator | 2026-01-02 01:40:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:40:18.570579 | orchestrator | 2026-01-02 01:40:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:40:18.573026 | orchestrator | 2026-01-02 01:40:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:40:18.573061 | orchestrator | 2026-01-02 01:40:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:40:21.622136 | orchestrator | 2026-01-02 01:40:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:40:21.623596 | orchestrator | 2026-01-02 01:40:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:40:21.623805 | orchestrator | 2026-01-02 01:40:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:40:24.677376 | orchestrator | 2026-01-02 01:40:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:40:24.678910 | orchestrator | 2026-01-02 01:40:24 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:40:24.678956 | orchestrator | 2026-01-02 01:40:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:40:27.728875 | orchestrator | 2026-01-02 01:40:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:40:27.730534 | orchestrator | 2026-01-02 01:40:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:40:27.730797 | orchestrator | 2026-01-02 01:40:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:40:30.777240 | orchestrator | 2026-01-02 01:40:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:40:30.778828 | orchestrator | 2026-01-02 01:40:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:40:30.778859 | orchestrator | 2026-01-02 01:40:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:40:33.831402 | orchestrator | 2026-01-02 01:40:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:40:33.834121 | orchestrator | 2026-01-02 01:40:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:40:33.834220 | orchestrator | 2026-01-02 01:40:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:40:36.886426 | orchestrator | 2026-01-02 01:40:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:40:36.887952 | orchestrator | 2026-01-02 01:40:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:40:36.887983 | orchestrator | 2026-01-02 01:40:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:40:39.939758 | orchestrator | 2026-01-02 01:40:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:40:39.942059 | orchestrator | 2026-01-02 01:40:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
01:40:39.942078 | orchestrator | 2026-01-02 01:40:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:40:42.993277 | orchestrator | 2026-01-02 01:40:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:40:42.995341 | orchestrator | 2026-01-02 01:40:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:40:42.995381 | orchestrator | 2026-01-02 01:40:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:40:46.049426 | orchestrator | 2026-01-02 01:40:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:40:46.052746 | orchestrator | 2026-01-02 01:40:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:40:46.052802 | orchestrator | 2026-01-02 01:40:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:40:49.099338 | orchestrator | 2026-01-02 01:40:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:40:49.184153 | orchestrator | 2026-01-02 01:40:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:40:49.184222 | orchestrator | 2026-01-02 01:40:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:40:52.149298 | orchestrator | 2026-01-02 01:40:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:40:52.149430 | orchestrator | 2026-01-02 01:40:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:40:52.149445 | orchestrator | 2026-01-02 01:40:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:40:55.200005 | orchestrator | 2026-01-02 01:40:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:40:55.202479 | orchestrator | 2026-01-02 01:40:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:40:55.202506 | orchestrator | 2026-01-02 01:40:55 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 01:40:58.251165 | orchestrator | 2026-01-02 01:40:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:40:58.252880 | orchestrator | 2026-01-02 01:40:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:40:58.252949 | orchestrator | 2026-01-02 01:40:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:01.299146 | orchestrator | 2026-01-02 01:41:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:41:01.301433 | orchestrator | 2026-01-02 01:41:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:41:01.301460 | orchestrator | 2026-01-02 01:41:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:04.351973 | orchestrator | 2026-01-02 01:41:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:41:04.353358 | orchestrator | 2026-01-02 01:41:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:41:04.353867 | orchestrator | 2026-01-02 01:41:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:07.404125 | orchestrator | 2026-01-02 01:41:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:41:07.407403 | orchestrator | 2026-01-02 01:41:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:41:07.407426 | orchestrator | 2026-01-02 01:41:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:10.449720 | orchestrator | 2026-01-02 01:41:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:41:10.450928 | orchestrator | 2026-01-02 01:41:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:41:10.451184 | orchestrator | 2026-01-02 01:41:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:13.491778 | orchestrator | 2026-01-02 
01:41:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:41:13.492639 | orchestrator | 2026-01-02 01:41:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:41:13.492654 | orchestrator | 2026-01-02 01:41:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:16.533584 | orchestrator | 2026-01-02 01:41:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:41:16.535405 | orchestrator | 2026-01-02 01:41:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:41:16.535462 | orchestrator | 2026-01-02 01:41:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:19.588269 | orchestrator | 2026-01-02 01:41:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:41:19.589580 | orchestrator | 2026-01-02 01:41:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:41:19.745425 | orchestrator | 2026-01-02 01:41:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:22.636281 | orchestrator | 2026-01-02 01:41:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:41:22.638990 | orchestrator | 2026-01-02 01:41:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:41:22.639104 | orchestrator | 2026-01-02 01:41:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:25.689959 | orchestrator | 2026-01-02 01:41:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:41:25.691842 | orchestrator | 2026-01-02 01:41:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:41:25.691929 | orchestrator | 2026-01-02 01:41:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:28.732170 | orchestrator | 2026-01-02 01:41:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 01:41:28.733310 | orchestrator | 2026-01-02 01:41:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:41:28.733344 | orchestrator | 2026-01-02 01:41:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:31.779258 | orchestrator | 2026-01-02 01:41:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:41:31.781339 | orchestrator | 2026-01-02 01:41:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:41:31.781396 | orchestrator | 2026-01-02 01:41:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:34.831210 | orchestrator | 2026-01-02 01:41:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:41:34.833275 | orchestrator | 2026-01-02 01:41:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:41:34.833410 | orchestrator | 2026-01-02 01:41:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:37.874792 | orchestrator | 2026-01-02 01:41:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:41:37.875512 | orchestrator | 2026-01-02 01:41:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:41:37.876071 | orchestrator | 2026-01-02 01:41:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:40.921568 | orchestrator | 2026-01-02 01:41:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:41:40.922750 | orchestrator | 2026-01-02 01:41:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:41:40.922807 | orchestrator | 2026-01-02 01:41:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:43.967120 | orchestrator | 2026-01-02 01:41:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:41:43.968788 | orchestrator | 2026-01-02 01:41:43 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:41:43.968867 | orchestrator | 2026-01-02 01:41:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:47.011534 | orchestrator | 2026-01-02 01:41:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:41:47.015253 | orchestrator | 2026-01-02 01:41:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:41:47.015326 | orchestrator | 2026-01-02 01:41:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:50.067483 | orchestrator | 2026-01-02 01:41:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:41:50.069471 | orchestrator | 2026-01-02 01:41:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:41:50.069812 | orchestrator | 2026-01-02 01:41:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:53.115421 | orchestrator | 2026-01-02 01:41:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:41:53.116210 | orchestrator | 2026-01-02 01:41:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:41:53.116231 | orchestrator | 2026-01-02 01:41:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:56.163782 | orchestrator | 2026-01-02 01:41:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:41:56.165290 | orchestrator | 2026-01-02 01:41:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:41:56.165323 | orchestrator | 2026-01-02 01:41:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:41:59.206195 | orchestrator | 2026-01-02 01:41:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:41:59.207938 | orchestrator | 2026-01-02 01:41:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
01:41:59.207975 | orchestrator | 2026-01-02 01:41:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:02.250327 | orchestrator | 2026-01-02 01:42:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:42:02.251907 | orchestrator | 2026-01-02 01:42:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:42:02.251947 | orchestrator | 2026-01-02 01:42:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:05.296781 | orchestrator | 2026-01-02 01:42:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:42:05.297990 | orchestrator | 2026-01-02 01:42:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:42:05.298104 | orchestrator | 2026-01-02 01:42:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:08.353243 | orchestrator | 2026-01-02 01:42:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:42:08.354597 | orchestrator | 2026-01-02 01:42:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:42:08.355337 | orchestrator | 2026-01-02 01:42:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:11.403471 | orchestrator | 2026-01-02 01:42:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:42:11.406077 | orchestrator | 2026-01-02 01:42:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:42:11.406143 | orchestrator | 2026-01-02 01:42:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:14.462005 | orchestrator | 2026-01-02 01:42:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:42:14.462143 | orchestrator | 2026-01-02 01:42:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:42:14.462160 | orchestrator | 2026-01-02 01:42:14 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 01:42:17.508671 | orchestrator | 2026-01-02 01:42:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:42:17.510380 | orchestrator | 2026-01-02 01:42:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:42:17.510438 | orchestrator | 2026-01-02 01:42:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:20.558599 | orchestrator | 2026-01-02 01:42:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:42:20.560238 | orchestrator | 2026-01-02 01:42:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:42:20.560270 | orchestrator | 2026-01-02 01:42:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:23.604789 | orchestrator | 2026-01-02 01:42:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:42:23.605944 | orchestrator | 2026-01-02 01:42:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:42:23.605987 | orchestrator | 2026-01-02 01:42:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:26.659904 | orchestrator | 2026-01-02 01:42:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:42:26.660895 | orchestrator | 2026-01-02 01:42:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:42:26.661174 | orchestrator | 2026-01-02 01:42:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:29.712916 | orchestrator | 2026-01-02 01:42:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:42:29.715270 | orchestrator | 2026-01-02 01:42:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:42:29.715321 | orchestrator | 2026-01-02 01:42:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:32.765073 | orchestrator | 2026-01-02 
01:42:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:42:32.767865 | orchestrator | 2026-01-02 01:42:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:42:32.767913 | orchestrator | 2026-01-02 01:42:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:35.823348 | orchestrator | 2026-01-02 01:42:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:42:35.825775 | orchestrator | 2026-01-02 01:42:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:42:35.825812 | orchestrator | 2026-01-02 01:42:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:38.873480 | orchestrator | 2026-01-02 01:42:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:42:38.875364 | orchestrator | 2026-01-02 01:42:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:42:38.875421 | orchestrator | 2026-01-02 01:42:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:41.920578 | orchestrator | 2026-01-02 01:42:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:42:41.922467 | orchestrator | 2026-01-02 01:42:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:42:41.922508 | orchestrator | 2026-01-02 01:42:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:44.972278 | orchestrator | 2026-01-02 01:42:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:42:44.974180 | orchestrator | 2026-01-02 01:42:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:42:44.974375 | orchestrator | 2026-01-02 01:42:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:48.024812 | orchestrator | 2026-01-02 01:42:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 01:42:48.026614 | orchestrator | 2026-01-02 01:42:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:42:48.026632 | orchestrator | 2026-01-02 01:42:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:51.077276 | orchestrator | 2026-01-02 01:42:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:42:51.079386 | orchestrator | 2026-01-02 01:42:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:42:51.079453 | orchestrator | 2026-01-02 01:42:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:54.130202 | orchestrator | 2026-01-02 01:42:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:42:54.132845 | orchestrator | 2026-01-02 01:42:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:42:54.133309 | orchestrator | 2026-01-02 01:42:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:57.184334 | orchestrator | 2026-01-02 01:42:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:42:57.186163 | orchestrator | 2026-01-02 01:42:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:42:57.186225 | orchestrator | 2026-01-02 01:42:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:00.242523 | orchestrator | 2026-01-02 01:43:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:43:00.245356 | orchestrator | 2026-01-02 01:43:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:43:00.245440 | orchestrator | 2026-01-02 01:43:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:03.293500 | orchestrator | 2026-01-02 01:43:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:43:03.294451 | orchestrator | 2026-01-02 01:43:03 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:43:03.294474 | orchestrator | 2026-01-02 01:43:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:06.347849 | orchestrator | 2026-01-02 01:43:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:43:06.348259 | orchestrator | 2026-01-02 01:43:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:43:06.348417 | orchestrator | 2026-01-02 01:43:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:09.399773 | orchestrator | 2026-01-02 01:43:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:43:09.402100 | orchestrator | 2026-01-02 01:43:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:43:09.402144 | orchestrator | 2026-01-02 01:43:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:12.450422 | orchestrator | 2026-01-02 01:43:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:43:12.453265 | orchestrator | 2026-01-02 01:43:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:43:12.453422 | orchestrator | 2026-01-02 01:43:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:15.506934 | orchestrator | 2026-01-02 01:43:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:43:15.508183 | orchestrator | 2026-01-02 01:43:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:43:15.508297 | orchestrator | 2026-01-02 01:43:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:18.552998 | orchestrator | 2026-01-02 01:43:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:43:18.554402 | orchestrator | 2026-01-02 01:43:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
01:43:18.554440 | orchestrator | 2026-01-02 01:43:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:21.606323 | orchestrator | 2026-01-02 01:43:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:43:21.608127 | orchestrator | 2026-01-02 01:43:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:43:21.608196 | orchestrator | 2026-01-02 01:43:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:24.656030 | orchestrator | 2026-01-02 01:43:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:43:24.658163 | orchestrator | 2026-01-02 01:43:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:43:24.658216 | orchestrator | 2026-01-02 01:43:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:27.703502 | orchestrator | 2026-01-02 01:43:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:43:27.705885 | orchestrator | 2026-01-02 01:43:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:43:27.705935 | orchestrator | 2026-01-02 01:43:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:30.746272 | orchestrator | 2026-01-02 01:43:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:43:30.747189 | orchestrator | 2026-01-02 01:43:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:43:30.747379 | orchestrator | 2026-01-02 01:43:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:33.798111 | orchestrator | 2026-01-02 01:43:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:43:33.799923 | orchestrator | 2026-01-02 01:43:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:43:33.800006 | orchestrator | 2026-01-02 01:43:33 | INFO  | Wait 1 second(s) 
until the next check
2026-01-02 01:43:36.838689 | orchestrator | 2026-01-02 01:43:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 01:43:36.840852 | orchestrator | 2026-01-02 01:43:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 01:43:36.840887 | orchestrator | 2026-01-02 01:43:36 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 01:43:39 through 01:48:48; both tasks remained in state STARTED throughout ...]
2026-01-02 01:48:51.199108 | orchestrator | 2026-01-02 01:48:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 01:48:51.200455 | orchestrator | 2026-01-02 01:48:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 01:48:51.200558 | orchestrator | 2026-01-02 01:48:51 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 01:48:54.247490 | orchestrator | 2026-01-02 01:48:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:48:54.248184 | orchestrator | 2026-01-02 01:48:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:48:54.248315 | orchestrator | 2026-01-02 01:48:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:57.296639 | orchestrator | 2026-01-02 01:48:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:48:57.298342 | orchestrator | 2026-01-02 01:48:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:48:57.298425 | orchestrator | 2026-01-02 01:48:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:00.338650 | orchestrator | 2026-01-02 01:49:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:49:00.340549 | orchestrator | 2026-01-02 01:49:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:49:00.340726 | orchestrator | 2026-01-02 01:49:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:03.391646 | orchestrator | 2026-01-02 01:49:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:49:03.393646 | orchestrator | 2026-01-02 01:49:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:49:03.393698 | orchestrator | 2026-01-02 01:49:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:06.444553 | orchestrator | 2026-01-02 01:49:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:49:06.446725 | orchestrator | 2026-01-02 01:49:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:49:06.446762 | orchestrator | 2026-01-02 01:49:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:09.493533 | orchestrator | 2026-01-02 
01:49:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:49:09.495333 | orchestrator | 2026-01-02 01:49:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:49:09.495389 | orchestrator | 2026-01-02 01:49:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:12.546881 | orchestrator | 2026-01-02 01:49:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:49:12.549644 | orchestrator | 2026-01-02 01:49:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:49:12.549720 | orchestrator | 2026-01-02 01:49:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:15.597429 | orchestrator | 2026-01-02 01:49:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:49:15.599839 | orchestrator | 2026-01-02 01:49:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:49:15.600014 | orchestrator | 2026-01-02 01:49:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:18.640890 | orchestrator | 2026-01-02 01:49:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:49:18.642328 | orchestrator | 2026-01-02 01:49:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:49:18.642382 | orchestrator | 2026-01-02 01:49:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:21.691353 | orchestrator | 2026-01-02 01:49:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:49:21.691934 | orchestrator | 2026-01-02 01:49:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:49:21.691980 | orchestrator | 2026-01-02 01:49:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:24.747271 | orchestrator | 2026-01-02 01:49:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 01:49:24.748788 | orchestrator | 2026-01-02 01:49:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:49:24.748935 | orchestrator | 2026-01-02 01:49:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:27.802317 | orchestrator | 2026-01-02 01:49:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:49:27.804861 | orchestrator | 2026-01-02 01:49:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:49:27.804912 | orchestrator | 2026-01-02 01:49:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:30.853782 | orchestrator | 2026-01-02 01:49:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:49:30.855435 | orchestrator | 2026-01-02 01:49:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:49:30.855513 | orchestrator | 2026-01-02 01:49:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:33.908131 | orchestrator | 2026-01-02 01:49:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:49:33.910810 | orchestrator | 2026-01-02 01:49:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:49:33.910903 | orchestrator | 2026-01-02 01:49:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:36.963437 | orchestrator | 2026-01-02 01:49:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:49:36.963735 | orchestrator | 2026-01-02 01:49:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:49:36.964261 | orchestrator | 2026-01-02 01:49:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:40.019182 | orchestrator | 2026-01-02 01:49:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:49:40.020968 | orchestrator | 2026-01-02 01:49:40 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:49:40.021036 | orchestrator | 2026-01-02 01:49:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:43.076249 | orchestrator | 2026-01-02 01:49:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:49:43.079401 | orchestrator | 2026-01-02 01:49:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:49:43.079484 | orchestrator | 2026-01-02 01:49:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:46.122643 | orchestrator | 2026-01-02 01:49:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:49:46.124307 | orchestrator | 2026-01-02 01:49:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:49:46.124361 | orchestrator | 2026-01-02 01:49:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:49.175973 | orchestrator | 2026-01-02 01:49:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:49:49.180203 | orchestrator | 2026-01-02 01:49:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:49:49.180250 | orchestrator | 2026-01-02 01:49:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:52.229869 | orchestrator | 2026-01-02 01:49:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:49:52.231502 | orchestrator | 2026-01-02 01:49:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:49:52.231537 | orchestrator | 2026-01-02 01:49:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:55.283894 | orchestrator | 2026-01-02 01:49:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:49:55.285027 | orchestrator | 2026-01-02 01:49:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
01:49:55.285113 | orchestrator | 2026-01-02 01:49:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:58.331660 | orchestrator | 2026-01-02 01:49:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:49:58.334697 | orchestrator | 2026-01-02 01:49:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:49:58.334973 | orchestrator | 2026-01-02 01:49:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:01.387012 | orchestrator | 2026-01-02 01:50:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:50:01.388538 | orchestrator | 2026-01-02 01:50:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:01.388665 | orchestrator | 2026-01-02 01:50:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:04.435209 | orchestrator | 2026-01-02 01:50:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:50:04.436567 | orchestrator | 2026-01-02 01:50:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:04.436603 | orchestrator | 2026-01-02 01:50:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:07.485408 | orchestrator | 2026-01-02 01:50:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:50:07.487579 | orchestrator | 2026-01-02 01:50:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:07.487615 | orchestrator | 2026-01-02 01:50:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:10.536813 | orchestrator | 2026-01-02 01:50:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:50:10.539473 | orchestrator | 2026-01-02 01:50:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:10.539517 | orchestrator | 2026-01-02 01:50:10 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 01:50:13.585495 | orchestrator | 2026-01-02 01:50:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:50:13.586126 | orchestrator | 2026-01-02 01:50:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:13.586217 | orchestrator | 2026-01-02 01:50:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:16.630954 | orchestrator | 2026-01-02 01:50:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:50:16.632567 | orchestrator | 2026-01-02 01:50:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:16.632648 | orchestrator | 2026-01-02 01:50:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:19.684210 | orchestrator | 2026-01-02 01:50:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:50:19.686292 | orchestrator | 2026-01-02 01:50:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:19.686492 | orchestrator | 2026-01-02 01:50:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:22.731140 | orchestrator | 2026-01-02 01:50:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:50:22.732731 | orchestrator | 2026-01-02 01:50:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:22.732765 | orchestrator | 2026-01-02 01:50:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:25.773874 | orchestrator | 2026-01-02 01:50:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:50:25.775132 | orchestrator | 2026-01-02 01:50:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:25.775189 | orchestrator | 2026-01-02 01:50:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:28.831319 | orchestrator | 2026-01-02 
01:50:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:50:28.833223 | orchestrator | 2026-01-02 01:50:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:28.833257 | orchestrator | 2026-01-02 01:50:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:31.879064 | orchestrator | 2026-01-02 01:50:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:50:31.880160 | orchestrator | 2026-01-02 01:50:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:31.880189 | orchestrator | 2026-01-02 01:50:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:34.931795 | orchestrator | 2026-01-02 01:50:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:50:34.944301 | orchestrator | 2026-01-02 01:50:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:34.944352 | orchestrator | 2026-01-02 01:50:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:37.988039 | orchestrator | 2026-01-02 01:50:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:50:37.988943 | orchestrator | 2026-01-02 01:50:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:37.988985 | orchestrator | 2026-01-02 01:50:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:41.033126 | orchestrator | 2026-01-02 01:50:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:50:41.035202 | orchestrator | 2026-01-02 01:50:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:41.035252 | orchestrator | 2026-01-02 01:50:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:44.087346 | orchestrator | 2026-01-02 01:50:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 01:50:44.089102 | orchestrator | 2026-01-02 01:50:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:44.089274 | orchestrator | 2026-01-02 01:50:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:47.137406 | orchestrator | 2026-01-02 01:50:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:50:47.139332 | orchestrator | 2026-01-02 01:50:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:47.139375 | orchestrator | 2026-01-02 01:50:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:50.184949 | orchestrator | 2026-01-02 01:50:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:50:50.186243 | orchestrator | 2026-01-02 01:50:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:50.186313 | orchestrator | 2026-01-02 01:50:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:53.238330 | orchestrator | 2026-01-02 01:50:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:50:53.240508 | orchestrator | 2026-01-02 01:50:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:53.240651 | orchestrator | 2026-01-02 01:50:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:56.287352 | orchestrator | 2026-01-02 01:50:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:50:56.289150 | orchestrator | 2026-01-02 01:50:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:56.289189 | orchestrator | 2026-01-02 01:50:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:59.343228 | orchestrator | 2026-01-02 01:50:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:50:59.345487 | orchestrator | 2026-01-02 01:50:59 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:50:59.345524 | orchestrator | 2026-01-02 01:50:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:51:02.389546 | orchestrator | 2026-01-02 01:51:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:51:02.391213 | orchestrator | 2026-01-02 01:51:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:51:02.391252 | orchestrator | 2026-01-02 01:51:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:51:05.443345 | orchestrator | 2026-01-02 01:51:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:51:05.445762 | orchestrator | 2026-01-02 01:51:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:51:05.445832 | orchestrator | 2026-01-02 01:51:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:51:08.489342 | orchestrator | 2026-01-02 01:51:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:51:08.491118 | orchestrator | 2026-01-02 01:51:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:51:08.491151 | orchestrator | 2026-01-02 01:51:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:51:11.536036 | orchestrator | 2026-01-02 01:51:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:51:11.538499 | orchestrator | 2026-01-02 01:51:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:51:11.538543 | orchestrator | 2026-01-02 01:51:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:51:14.580438 | orchestrator | 2026-01-02 01:51:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:51:14.581399 | orchestrator | 2026-01-02 01:51:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
01:51:14.581438 | orchestrator | 2026-01-02 01:51:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:51:17.633466 | orchestrator | 2026-01-02 01:51:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:51:17.634791 | orchestrator | 2026-01-02 01:51:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:51:17.634985 | orchestrator | 2026-01-02 01:51:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:51:20.680761 | orchestrator | 2026-01-02 01:51:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:51:20.683014 | orchestrator | 2026-01-02 01:51:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:51:20.683097 | orchestrator | 2026-01-02 01:51:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:51:23.726664 | orchestrator | 2026-01-02 01:51:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:51:23.729510 | orchestrator | 2026-01-02 01:51:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:51:23.729567 | orchestrator | 2026-01-02 01:51:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:51:26.785215 | orchestrator | 2026-01-02 01:51:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:51:26.786417 | orchestrator | 2026-01-02 01:51:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:51:26.786541 | orchestrator | 2026-01-02 01:51:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:51:29.840738 | orchestrator | 2026-01-02 01:51:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:51:29.842787 | orchestrator | 2026-01-02 01:51:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:51:29.842824 | orchestrator | 2026-01-02 01:51:29 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 01:51:32.891073 | orchestrator | 2026-01-02 01:51:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:51:32.894457 | orchestrator | 2026-01-02 01:51:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:51:32.894509 | orchestrator | 2026-01-02 01:51:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:51:35.944369 | orchestrator | 2026-01-02 01:51:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:51:35.946162 | orchestrator | 2026-01-02 01:51:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:51:35.946186 | orchestrator | 2026-01-02 01:51:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:51:39.001216 | orchestrator | 2026-01-02 01:51:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:51:39.003202 | orchestrator | 2026-01-02 01:51:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:51:39.003594 | orchestrator | 2026-01-02 01:51:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:51:42.051123 | orchestrator | 2026-01-02 01:51:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:51:42.052266 | orchestrator | 2026-01-02 01:51:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:51:42.052299 | orchestrator | 2026-01-02 01:51:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:51:45.107028 | orchestrator | 2026-01-02 01:51:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:51:45.108831 | orchestrator | 2026-01-02 01:51:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:51:45.108955 | orchestrator | 2026-01-02 01:51:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:51:48.161051 | orchestrator | 2026-01-02 
01:51:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:51:48.162448 | orchestrator | 2026-01-02 01:51:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:51:48.162489 | orchestrator | 2026-01-02 01:51:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:51:51.212826 | orchestrator | 2026-01-02 01:51:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:51:51.215692 | orchestrator | 2026-01-02 01:51:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:51:51.215789 | orchestrator | 2026-01-02 01:51:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:51:54.267235 | orchestrator | 2026-01-02 01:51:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:51:54.269670 | orchestrator | 2026-01-02 01:51:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:51:54.269741 | orchestrator | 2026-01-02 01:51:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:51:57.311943 | orchestrator | 2026-01-02 01:51:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:51:57.313567 | orchestrator | 2026-01-02 01:51:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:51:57.313630 | orchestrator | 2026-01-02 01:51:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:52:00.367165 | orchestrator | 2026-01-02 01:52:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:52:00.368329 | orchestrator | 2026-01-02 01:52:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:52:00.368366 | orchestrator | 2026-01-02 01:52:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:52:03.412155 | orchestrator | 2026-01-02 01:52:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 01:52:03.414753 | orchestrator | 2026-01-02 01:52:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:52:03.414845 | orchestrator | 2026-01-02 01:52:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:52:06.464168 | orchestrator | 2026-01-02 01:52:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:52:06.465772 | orchestrator | 2026-01-02 01:52:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:52:06.465806 | orchestrator | 2026-01-02 01:52:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:52:09.517978 | orchestrator | 2026-01-02 01:52:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:52:09.519328 | orchestrator | 2026-01-02 01:52:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:52:09.519359 | orchestrator | 2026-01-02 01:52:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:52:12.570367 | orchestrator | 2026-01-02 01:52:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:52:12.573679 | orchestrator | 2026-01-02 01:52:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:52:12.573790 | orchestrator | 2026-01-02 01:52:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:52:15.622737 | orchestrator | 2026-01-02 01:52:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:52:15.624664 | orchestrator | 2026-01-02 01:52:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:52:15.624712 | orchestrator | 2026-01-02 01:52:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:52:18.675622 | orchestrator | 2026-01-02 01:52:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:52:18.677593 | orchestrator | 2026-01-02 01:52:18 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:52:18.677634 | orchestrator | 2026-01-02 01:52:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:52:21.725716 | orchestrator | 2026-01-02 01:52:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:52:21.727019 | orchestrator | 2026-01-02 01:52:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:52:21.727047 | orchestrator | 2026-01-02 01:52:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:52:24.776284 | orchestrator | 2026-01-02 01:52:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:52:24.778130 | orchestrator | 2026-01-02 01:52:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:52:24.778179 | orchestrator | 2026-01-02 01:52:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:52:27.823117 | orchestrator | 2026-01-02 01:52:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:52:27.823800 | orchestrator | 2026-01-02 01:52:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:52:27.823965 | orchestrator | 2026-01-02 01:52:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:52:30.874078 | orchestrator | 2026-01-02 01:52:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:52:30.876997 | orchestrator | 2026-01-02 01:52:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:52:30.877085 | orchestrator | 2026-01-02 01:52:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:52:33.925683 | orchestrator | 2026-01-02 01:52:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:52:33.927837 | orchestrator | 2026-01-02 01:52:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
01:52:33.928013 | orchestrator | 2026-01-02 01:52:33 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:52:36.969339 | orchestrator | 2026-01-02 01:52:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 01:52:36.970336 | orchestrator | 2026-01-02 01:52:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 01:52:36.970375 | orchestrator | 2026-01-02 01:52:36 | INFO  | Wait 1 second(s) until the next check
[... the same three-line polling cycle repeats every ~3 seconds from 01:52:40 through 01:58:03; tasks e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c and 922cb08d-5634-4147-8b36-6e252cfb52ba remain in state STARTED throughout ...]
2026-01-02 01:58:06.499070 | orchestrator | 2026-01-02 01:58:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 01:58:06.500749 | orchestrator | 2026-01-02 01:58:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 01:58:06.500832 | orchestrator | 2026-01-02 01:58:06 | INFO  | Wait 1 second(s)
until the next check 2026-01-02 01:58:09.545062 | orchestrator | 2026-01-02 01:58:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:58:09.547483 | orchestrator | 2026-01-02 01:58:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:58:09.547550 | orchestrator | 2026-01-02 01:58:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:12.589442 | orchestrator | 2026-01-02 01:58:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:58:12.591873 | orchestrator | 2026-01-02 01:58:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:58:12.591921 | orchestrator | 2026-01-02 01:58:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:15.639353 | orchestrator | 2026-01-02 01:58:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:58:15.641128 | orchestrator | 2026-01-02 01:58:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:58:15.641530 | orchestrator | 2026-01-02 01:58:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:18.684251 | orchestrator | 2026-01-02 01:58:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:58:18.685922 | orchestrator | 2026-01-02 01:58:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:58:18.686082 | orchestrator | 2026-01-02 01:58:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:21.734332 | orchestrator | 2026-01-02 01:58:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:58:21.735785 | orchestrator | 2026-01-02 01:58:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:58:21.735827 | orchestrator | 2026-01-02 01:58:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:24.790572 | orchestrator | 2026-01-02 
01:58:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:58:24.791899 | orchestrator | 2026-01-02 01:58:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:58:24.815415 | orchestrator | 2026-01-02 01:58:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:27.840650 | orchestrator | 2026-01-02 01:58:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:58:27.842201 | orchestrator | 2026-01-02 01:58:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:58:27.842243 | orchestrator | 2026-01-02 01:58:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:30.889602 | orchestrator | 2026-01-02 01:58:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:58:30.891738 | orchestrator | 2026-01-02 01:58:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:58:30.891773 | orchestrator | 2026-01-02 01:58:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:33.942528 | orchestrator | 2026-01-02 01:58:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:58:33.943614 | orchestrator | 2026-01-02 01:58:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:58:33.943634 | orchestrator | 2026-01-02 01:58:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:36.997522 | orchestrator | 2026-01-02 01:58:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:58:37.004024 | orchestrator | 2026-01-02 01:58:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:58:37.005411 | orchestrator | 2026-01-02 01:58:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:40.062419 | orchestrator | 2026-01-02 01:58:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 01:58:40.063643 | orchestrator | 2026-01-02 01:58:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:58:40.063679 | orchestrator | 2026-01-02 01:58:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:43.107245 | orchestrator | 2026-01-02 01:58:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:58:43.107755 | orchestrator | 2026-01-02 01:58:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:58:43.107806 | orchestrator | 2026-01-02 01:58:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:46.155928 | orchestrator | 2026-01-02 01:58:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:58:46.157309 | orchestrator | 2026-01-02 01:58:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:58:46.157546 | orchestrator | 2026-01-02 01:58:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:49.206816 | orchestrator | 2026-01-02 01:58:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:58:49.208772 | orchestrator | 2026-01-02 01:58:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:58:49.209258 | orchestrator | 2026-01-02 01:58:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:52.253936 | orchestrator | 2026-01-02 01:58:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:58:52.255388 | orchestrator | 2026-01-02 01:58:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:58:52.255436 | orchestrator | 2026-01-02 01:58:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:55.299260 | orchestrator | 2026-01-02 01:58:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:58:55.301678 | orchestrator | 2026-01-02 01:58:55 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:58:55.301780 | orchestrator | 2026-01-02 01:58:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:58.355578 | orchestrator | 2026-01-02 01:58:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:58:58.359550 | orchestrator | 2026-01-02 01:58:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:58:58.359604 | orchestrator | 2026-01-02 01:58:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:01.410415 | orchestrator | 2026-01-02 01:59:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:59:01.411190 | orchestrator | 2026-01-02 01:59:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:59:01.411310 | orchestrator | 2026-01-02 01:59:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:04.467228 | orchestrator | 2026-01-02 01:59:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:59:04.468858 | orchestrator | 2026-01-02 01:59:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:59:04.468909 | orchestrator | 2026-01-02 01:59:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:07.520011 | orchestrator | 2026-01-02 01:59:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:59:07.522312 | orchestrator | 2026-01-02 01:59:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:59:07.522343 | orchestrator | 2026-01-02 01:59:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:10.575851 | orchestrator | 2026-01-02 01:59:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:59:10.577982 | orchestrator | 2026-01-02 01:59:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
01:59:10.578167 | orchestrator | 2026-01-02 01:59:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:13.635398 | orchestrator | 2026-01-02 01:59:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:59:13.637049 | orchestrator | 2026-01-02 01:59:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:59:13.637174 | orchestrator | 2026-01-02 01:59:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:16.687298 | orchestrator | 2026-01-02 01:59:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:59:16.690276 | orchestrator | 2026-01-02 01:59:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:59:16.690320 | orchestrator | 2026-01-02 01:59:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:19.742370 | orchestrator | 2026-01-02 01:59:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:59:19.743445 | orchestrator | 2026-01-02 01:59:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:59:19.743493 | orchestrator | 2026-01-02 01:59:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:22.783815 | orchestrator | 2026-01-02 01:59:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:59:22.785670 | orchestrator | 2026-01-02 01:59:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:59:22.785758 | orchestrator | 2026-01-02 01:59:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:25.835328 | orchestrator | 2026-01-02 01:59:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:59:25.836766 | orchestrator | 2026-01-02 01:59:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:59:25.836797 | orchestrator | 2026-01-02 01:59:25 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 01:59:28.887606 | orchestrator | 2026-01-02 01:59:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:59:28.889773 | orchestrator | 2026-01-02 01:59:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:59:28.889810 | orchestrator | 2026-01-02 01:59:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:31.932236 | orchestrator | 2026-01-02 01:59:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:59:31.933243 | orchestrator | 2026-01-02 01:59:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:59:32.098205 | orchestrator | 2026-01-02 01:59:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:34.983594 | orchestrator | 2026-01-02 01:59:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:59:34.987141 | orchestrator | 2026-01-02 01:59:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:59:34.987236 | orchestrator | 2026-01-02 01:59:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:38.037911 | orchestrator | 2026-01-02 01:59:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:59:38.039491 | orchestrator | 2026-01-02 01:59:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:59:38.039593 | orchestrator | 2026-01-02 01:59:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:41.085609 | orchestrator | 2026-01-02 01:59:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:59:41.087689 | orchestrator | 2026-01-02 01:59:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:59:41.087735 | orchestrator | 2026-01-02 01:59:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:44.134277 | orchestrator | 2026-01-02 
01:59:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:59:44.135554 | orchestrator | 2026-01-02 01:59:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:59:44.135573 | orchestrator | 2026-01-02 01:59:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:47.179440 | orchestrator | 2026-01-02 01:59:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:59:47.181070 | orchestrator | 2026-01-02 01:59:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:59:47.181119 | orchestrator | 2026-01-02 01:59:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:50.230784 | orchestrator | 2026-01-02 01:59:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:59:50.232793 | orchestrator | 2026-01-02 01:59:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:59:50.232827 | orchestrator | 2026-01-02 01:59:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:53.280786 | orchestrator | 2026-01-02 01:59:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:59:53.282624 | orchestrator | 2026-01-02 01:59:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:59:53.282672 | orchestrator | 2026-01-02 01:59:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:56.325904 | orchestrator | 2026-01-02 01:59:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 01:59:56.326864 | orchestrator | 2026-01-02 01:59:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:59:56.326897 | orchestrator | 2026-01-02 01:59:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:59.377502 | orchestrator | 2026-01-02 01:59:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 01:59:59.381034 | orchestrator | 2026-01-02 01:59:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 01:59:59.381082 | orchestrator | 2026-01-02 01:59:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:00:02.430737 | orchestrator | 2026-01-02 02:00:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:00:02.432731 | orchestrator | 2026-01-02 02:00:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:00:02.432816 | orchestrator | 2026-01-02 02:00:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:00:05.484161 | orchestrator | 2026-01-02 02:00:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:00:05.486098 | orchestrator | 2026-01-02 02:00:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:00:05.486674 | orchestrator | 2026-01-02 02:00:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:00:08.531758 | orchestrator | 2026-01-02 02:00:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:00:08.533635 | orchestrator | 2026-01-02 02:00:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:00:08.533756 | orchestrator | 2026-01-02 02:00:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:00:11.576463 | orchestrator | 2026-01-02 02:00:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:00:11.578651 | orchestrator | 2026-01-02 02:00:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:00:11.578761 | orchestrator | 2026-01-02 02:00:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:00:14.626871 | orchestrator | 2026-01-02 02:00:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:00:14.627215 | orchestrator | 2026-01-02 02:00:14 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:00:14.627242 | orchestrator | 2026-01-02 02:00:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:00:17.677552 | orchestrator | 2026-01-02 02:00:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:00:17.680321 | orchestrator | 2026-01-02 02:00:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:00:17.680644 | orchestrator | 2026-01-02 02:00:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:00:20.730474 | orchestrator | 2026-01-02 02:00:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:00:20.731817 | orchestrator | 2026-01-02 02:00:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:00:20.731945 | orchestrator | 2026-01-02 02:00:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:00:23.774597 | orchestrator | 2026-01-02 02:00:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:00:23.776803 | orchestrator | 2026-01-02 02:00:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:00:23.777030 | orchestrator | 2026-01-02 02:00:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:00:26.828823 | orchestrator | 2026-01-02 02:00:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:00:26.830593 | orchestrator | 2026-01-02 02:00:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:00:26.830644 | orchestrator | 2026-01-02 02:00:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:00:29.880918 | orchestrator | 2026-01-02 02:00:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:00:29.882800 | orchestrator | 2026-01-02 02:00:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:00:29.882844 | orchestrator | 2026-01-02 02:00:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:00:32.933545 | orchestrator | 2026-01-02 02:00:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:00:32.935616 | orchestrator | 2026-01-02 02:00:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:00:32.935677 | orchestrator | 2026-01-02 02:00:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:00:35.983536 | orchestrator | 2026-01-02 02:00:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:00:35.986340 | orchestrator | 2026-01-02 02:00:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:00:35.986480 | orchestrator | 2026-01-02 02:00:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:00:39.040820 | orchestrator | 2026-01-02 02:00:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:00:39.042860 | orchestrator | 2026-01-02 02:00:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:00:39.042910 | orchestrator | 2026-01-02 02:00:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:00:42.094143 | orchestrator | 2026-01-02 02:00:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:00:42.096002 | orchestrator | 2026-01-02 02:00:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:00:42.096063 | orchestrator | 2026-01-02 02:00:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:00:45.148319 | orchestrator | 2026-01-02 02:00:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:00:45.149624 | orchestrator | 2026-01-02 02:00:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:00:45.149658 | orchestrator | 2026-01-02 02:00:45 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:00:48.198420 | orchestrator | 2026-01-02 02:00:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:00:48.199930 | orchestrator | 2026-01-02 02:00:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:00:48.200124 | orchestrator | 2026-01-02 02:00:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:00:51.242206 | orchestrator | 2026-01-02 02:00:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:00:51.244923 | orchestrator | 2026-01-02 02:00:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:00:51.245229 | orchestrator | 2026-01-02 02:00:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:00:54.292246 | orchestrator | 2026-01-02 02:00:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:00:54.295125 | orchestrator | 2026-01-02 02:00:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:00:54.295171 | orchestrator | 2026-01-02 02:00:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:00:57.339949 | orchestrator | 2026-01-02 02:00:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:00:57.342301 | orchestrator | 2026-01-02 02:00:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:00:57.342350 | orchestrator | 2026-01-02 02:00:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:01:00.390132 | orchestrator | 2026-01-02 02:01:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:01:00.392444 | orchestrator | 2026-01-02 02:01:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:01:00.393177 | orchestrator | 2026-01-02 02:01:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:01:03.435471 | orchestrator | 2026-01-02 
02:01:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:01:03.438070 | orchestrator | 2026-01-02 02:01:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:01:03.438220 | orchestrator | 2026-01-02 02:01:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:01:06.494556 | orchestrator | 2026-01-02 02:01:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:01:06.496173 | orchestrator | 2026-01-02 02:01:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:01:06.497044 | orchestrator | 2026-01-02 02:01:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:01:09.544452 | orchestrator | 2026-01-02 02:01:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:01:09.545554 | orchestrator | 2026-01-02 02:01:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:01:09.545624 | orchestrator | 2026-01-02 02:01:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:01:12.598397 | orchestrator | 2026-01-02 02:01:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:01:12.600435 | orchestrator | 2026-01-02 02:01:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:01:12.600534 | orchestrator | 2026-01-02 02:01:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:01:15.647122 | orchestrator | 2026-01-02 02:01:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:01:15.648898 | orchestrator | 2026-01-02 02:01:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:01:15.648937 | orchestrator | 2026-01-02 02:01:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:01:18.695746 | orchestrator | 2026-01-02 02:01:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:01:18.696700 | orchestrator | 2026-01-02 02:01:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:01:18.696732 | orchestrator | 2026-01-02 02:01:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:01:21.747359 | orchestrator | 2026-01-02 02:01:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:01:21.750395 | orchestrator | 2026-01-02 02:01:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:01:21.750479 | orchestrator | 2026-01-02 02:01:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:01:24.800401 | orchestrator | 2026-01-02 02:01:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:01:24.802891 | orchestrator | 2026-01-02 02:01:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:01:24.802928 | orchestrator | 2026-01-02 02:01:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:01:27.856621 | orchestrator | 2026-01-02 02:01:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:01:27.858568 | orchestrator | 2026-01-02 02:01:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:01:27.858768 | orchestrator | 2026-01-02 02:01:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:01:30.908985 | orchestrator | 2026-01-02 02:01:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:01:30.912153 | orchestrator | 2026-01-02 02:01:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:01:30.912197 | orchestrator | 2026-01-02 02:01:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:01:33.961869 | orchestrator | 2026-01-02 02:01:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:01:33.963468 | orchestrator | 2026-01-02 02:01:33 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:01:33.963712 | orchestrator | 2026-01-02 02:01:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:01:37.016389 | orchestrator | 2026-01-02 02:01:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:01:37.017702 | orchestrator | 2026-01-02 02:01:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:01:37.017741 | orchestrator | 2026-01-02 02:01:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:01:40.070298 | orchestrator | 2026-01-02 02:01:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:01:40.070886 | orchestrator | 2026-01-02 02:01:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:01:40.071166 | orchestrator | 2026-01-02 02:01:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:01:43.111596 | orchestrator | 2026-01-02 02:01:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:01:43.114251 | orchestrator | 2026-01-02 02:01:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:01:43.114285 | orchestrator | 2026-01-02 02:01:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:01:46.168623 | orchestrator | 2026-01-02 02:01:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:01:46.170271 | orchestrator | 2026-01-02 02:01:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:01:46.170382 | orchestrator | 2026-01-02 02:01:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:01:49.219381 | orchestrator | 2026-01-02 02:01:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:01:49.221499 | orchestrator | 2026-01-02 02:01:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:01:49.221642 | orchestrator | 2026-01-02 02:01:49 | INFO  | Wait 1 second(s) until the next check
2026-01-02 02:01:52.273175 | orchestrator | 2026-01-02 02:01:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 02:01:52.274859 | orchestrator | 2026-01-02 02:01:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 02:01:52.275095 | orchestrator | 2026-01-02 02:01:52 | INFO  | Wait 1 second(s) until the next check
2026-01-02 02:06:51.302087 | orchestrator | 2026-01-02 02:06:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 02:06:51.303488 | orchestrator | 2026-01-02 02:06:51 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:06:51.303579 | orchestrator | 2026-01-02 02:06:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:54.354640 | orchestrator | 2026-01-02 02:06:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:06:54.356087 | orchestrator | 2026-01-02 02:06:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:06:54.356124 | orchestrator | 2026-01-02 02:06:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:57.410464 | orchestrator | 2026-01-02 02:06:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:06:57.412684 | orchestrator | 2026-01-02 02:06:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:06:57.412840 | orchestrator | 2026-01-02 02:06:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:00.456289 | orchestrator | 2026-01-02 02:07:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:07:00.457696 | orchestrator | 2026-01-02 02:07:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:07:00.457751 | orchestrator | 2026-01-02 02:07:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:03.510326 | orchestrator | 2026-01-02 02:07:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:07:03.512493 | orchestrator | 2026-01-02 02:07:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:07:03.512535 | orchestrator | 2026-01-02 02:07:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:06.559305 | orchestrator | 2026-01-02 02:07:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:07:06.560557 | orchestrator | 2026-01-02 02:07:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:07:06.560598 | orchestrator | 2026-01-02 02:07:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:09.607765 | orchestrator | 2026-01-02 02:07:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:07:09.610388 | orchestrator | 2026-01-02 02:07:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:07:09.610605 | orchestrator | 2026-01-02 02:07:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:12.658255 | orchestrator | 2026-01-02 02:07:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:07:12.660068 | orchestrator | 2026-01-02 02:07:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:07:12.660116 | orchestrator | 2026-01-02 02:07:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:15.703124 | orchestrator | 2026-01-02 02:07:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:07:15.704594 | orchestrator | 2026-01-02 02:07:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:07:15.704626 | orchestrator | 2026-01-02 02:07:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:18.744423 | orchestrator | 2026-01-02 02:07:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:07:18.747164 | orchestrator | 2026-01-02 02:07:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:07:18.747218 | orchestrator | 2026-01-02 02:07:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:21.791660 | orchestrator | 2026-01-02 02:07:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:07:21.793226 | orchestrator | 2026-01-02 02:07:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:07:21.793265 | orchestrator | 2026-01-02 02:07:21 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:07:24.851762 | orchestrator | 2026-01-02 02:07:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:07:24.853245 | orchestrator | 2026-01-02 02:07:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:07:24.853295 | orchestrator | 2026-01-02 02:07:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:27.906284 | orchestrator | 2026-01-02 02:07:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:07:27.908888 | orchestrator | 2026-01-02 02:07:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:07:27.909251 | orchestrator | 2026-01-02 02:07:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:30.961860 | orchestrator | 2026-01-02 02:07:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:07:30.963075 | orchestrator | 2026-01-02 02:07:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:07:30.963111 | orchestrator | 2026-01-02 02:07:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:34.009426 | orchestrator | 2026-01-02 02:07:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:07:34.011118 | orchestrator | 2026-01-02 02:07:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:07:34.011203 | orchestrator | 2026-01-02 02:07:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:37.064625 | orchestrator | 2026-01-02 02:07:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:07:37.066644 | orchestrator | 2026-01-02 02:07:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:07:37.066718 | orchestrator | 2026-01-02 02:07:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:40.116370 | orchestrator | 2026-01-02 
02:07:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:07:40.118326 | orchestrator | 2026-01-02 02:07:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:07:40.118449 | orchestrator | 2026-01-02 02:07:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:43.169613 | orchestrator | 2026-01-02 02:07:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:07:43.172382 | orchestrator | 2026-01-02 02:07:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:07:43.172442 | orchestrator | 2026-01-02 02:07:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:46.224535 | orchestrator | 2026-01-02 02:07:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:07:46.226880 | orchestrator | 2026-01-02 02:07:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:07:46.226961 | orchestrator | 2026-01-02 02:07:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:49.272788 | orchestrator | 2026-01-02 02:07:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:07:49.273912 | orchestrator | 2026-01-02 02:07:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:07:49.274086 | orchestrator | 2026-01-02 02:07:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:52.317407 | orchestrator | 2026-01-02 02:07:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:07:52.318877 | orchestrator | 2026-01-02 02:07:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:07:52.318918 | orchestrator | 2026-01-02 02:07:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:55.368534 | orchestrator | 2026-01-02 02:07:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:07:55.369479 | orchestrator | 2026-01-02 02:07:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:07:55.369545 | orchestrator | 2026-01-02 02:07:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:58.414753 | orchestrator | 2026-01-02 02:07:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:07:58.417211 | orchestrator | 2026-01-02 02:07:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:07:58.417605 | orchestrator | 2026-01-02 02:07:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:01.467770 | orchestrator | 2026-01-02 02:08:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:01.469533 | orchestrator | 2026-01-02 02:08:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:08:01.469579 | orchestrator | 2026-01-02 02:08:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:04.521528 | orchestrator | 2026-01-02 02:08:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:04.523361 | orchestrator | 2026-01-02 02:08:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:08:04.523419 | orchestrator | 2026-01-02 02:08:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:07.568686 | orchestrator | 2026-01-02 02:08:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:07.572566 | orchestrator | 2026-01-02 02:08:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:08:07.572660 | orchestrator | 2026-01-02 02:08:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:10.616615 | orchestrator | 2026-01-02 02:08:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:10.618660 | orchestrator | 2026-01-02 02:08:10 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:08:10.618712 | orchestrator | 2026-01-02 02:08:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:13.662468 | orchestrator | 2026-01-02 02:08:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:13.664920 | orchestrator | 2026-01-02 02:08:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:08:13.666629 | orchestrator | 2026-01-02 02:08:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:16.710322 | orchestrator | 2026-01-02 02:08:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:16.711919 | orchestrator | 2026-01-02 02:08:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:08:16.712034 | orchestrator | 2026-01-02 02:08:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:19.766111 | orchestrator | 2026-01-02 02:08:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:19.768496 | orchestrator | 2026-01-02 02:08:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:08:19.768548 | orchestrator | 2026-01-02 02:08:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:22.814886 | orchestrator | 2026-01-02 02:08:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:22.816551 | orchestrator | 2026-01-02 02:08:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:08:22.816587 | orchestrator | 2026-01-02 02:08:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:25.871264 | orchestrator | 2026-01-02 02:08:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:25.873776 | orchestrator | 2026-01-02 02:08:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:08:25.873897 | orchestrator | 2026-01-02 02:08:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:28.918700 | orchestrator | 2026-01-02 02:08:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:28.920512 | orchestrator | 2026-01-02 02:08:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:08:28.920595 | orchestrator | 2026-01-02 02:08:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:31.970704 | orchestrator | 2026-01-02 02:08:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:31.973506 | orchestrator | 2026-01-02 02:08:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:08:31.973582 | orchestrator | 2026-01-02 02:08:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:35.025209 | orchestrator | 2026-01-02 02:08:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:35.026727 | orchestrator | 2026-01-02 02:08:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:08:35.026760 | orchestrator | 2026-01-02 02:08:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:38.074774 | orchestrator | 2026-01-02 02:08:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:38.074965 | orchestrator | 2026-01-02 02:08:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:08:38.075036 | orchestrator | 2026-01-02 02:08:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:41.124531 | orchestrator | 2026-01-02 02:08:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:41.126100 | orchestrator | 2026-01-02 02:08:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:08:41.126150 | orchestrator | 2026-01-02 02:08:41 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:08:44.172801 | orchestrator | 2026-01-02 02:08:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:44.175098 | orchestrator | 2026-01-02 02:08:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:08:44.175140 | orchestrator | 2026-01-02 02:08:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:47.219040 | orchestrator | 2026-01-02 02:08:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:47.220318 | orchestrator | 2026-01-02 02:08:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:08:47.220352 | orchestrator | 2026-01-02 02:08:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:50.272985 | orchestrator | 2026-01-02 02:08:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:50.274838 | orchestrator | 2026-01-02 02:08:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:08:50.274881 | orchestrator | 2026-01-02 02:08:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:53.321190 | orchestrator | 2026-01-02 02:08:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:53.322820 | orchestrator | 2026-01-02 02:08:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:08:53.322912 | orchestrator | 2026-01-02 02:08:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:56.375409 | orchestrator | 2026-01-02 02:08:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:56.376707 | orchestrator | 2026-01-02 02:08:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:08:56.376889 | orchestrator | 2026-01-02 02:08:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:59.425472 | orchestrator | 2026-01-02 
02:08:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:08:59.427032 | orchestrator | 2026-01-02 02:08:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:08:59.427083 | orchestrator | 2026-01-02 02:08:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:09:02.475325 | orchestrator | 2026-01-02 02:09:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:09:02.476913 | orchestrator | 2026-01-02 02:09:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:09:02.477068 | orchestrator | 2026-01-02 02:09:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:09:05.527329 | orchestrator | 2026-01-02 02:09:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:09:05.529373 | orchestrator | 2026-01-02 02:09:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:09:05.529438 | orchestrator | 2026-01-02 02:09:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:09:08.579800 | orchestrator | 2026-01-02 02:09:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:09:08.580485 | orchestrator | 2026-01-02 02:09:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:09:08.580512 | orchestrator | 2026-01-02 02:09:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:09:11.633971 | orchestrator | 2026-01-02 02:09:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:09:11.635095 | orchestrator | 2026-01-02 02:09:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:09:11.635364 | orchestrator | 2026-01-02 02:09:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:09:14.680217 | orchestrator | 2026-01-02 02:09:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:09:14.682215 | orchestrator | 2026-01-02 02:09:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:09:14.682258 | orchestrator | 2026-01-02 02:09:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:09:17.733466 | orchestrator | 2026-01-02 02:09:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:09:17.735195 | orchestrator | 2026-01-02 02:09:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:09:17.735250 | orchestrator | 2026-01-02 02:09:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:09:20.779694 | orchestrator | 2026-01-02 02:09:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:09:20.782834 | orchestrator | 2026-01-02 02:09:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:09:20.782892 | orchestrator | 2026-01-02 02:09:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:09:23.839747 | orchestrator | 2026-01-02 02:09:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:09:23.842267 | orchestrator | 2026-01-02 02:09:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:09:23.842308 | orchestrator | 2026-01-02 02:09:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:09:26.896143 | orchestrator | 2026-01-02 02:09:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:09:26.898574 | orchestrator | 2026-01-02 02:09:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:09:26.898644 | orchestrator | 2026-01-02 02:09:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:09:29.947802 | orchestrator | 2026-01-02 02:09:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:09:29.948899 | orchestrator | 2026-01-02 02:09:29 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:09:29.948918 | orchestrator | 2026-01-02 02:09:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:09:32.992057 | orchestrator | 2026-01-02 02:09:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:09:32.993755 | orchestrator | 2026-01-02 02:09:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:09:32.994110 | orchestrator | 2026-01-02 02:09:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:09:36.052469 | orchestrator | 2026-01-02 02:09:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:09:36.054176 | orchestrator | 2026-01-02 02:09:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:09:36.054203 | orchestrator | 2026-01-02 02:09:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:09:39.094700 | orchestrator | 2026-01-02 02:09:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:09:39.096129 | orchestrator | 2026-01-02 02:09:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:09:39.096215 | orchestrator | 2026-01-02 02:09:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:09:42.142926 | orchestrator | 2026-01-02 02:09:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:09:42.146602 | orchestrator | 2026-01-02 02:09:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:09:42.146660 | orchestrator | 2026-01-02 02:09:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:09:45.206606 | orchestrator | 2026-01-02 02:09:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:09:45.208172 | orchestrator | 2026-01-02 02:09:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:09:45.208208 | orchestrator | 2026-01-02 02:09:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:09:48.257038 | orchestrator | 2026-01-02 02:09:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:09:48.258912 | orchestrator | 2026-01-02 02:09:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:09:48.259175 | orchestrator | 2026-01-02 02:09:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:09:51.299136 | orchestrator | 2026-01-02 02:09:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:09:51.300829 | orchestrator | 2026-01-02 02:09:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:09:51.300855 | orchestrator | 2026-01-02 02:09:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:09:54.350285 | orchestrator | 2026-01-02 02:09:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:09:54.350524 | orchestrator | 2026-01-02 02:09:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:09:54.350550 | orchestrator | 2026-01-02 02:09:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:09:57.400215 | orchestrator | 2026-01-02 02:09:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:09:57.401542 | orchestrator | 2026-01-02 02:09:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:09:57.401600 | orchestrator | 2026-01-02 02:09:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:10:00.446941 | orchestrator | 2026-01-02 02:10:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:10:00.450837 | orchestrator | 2026-01-02 02:10:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:10:00.450875 | orchestrator | 2026-01-02 02:10:00 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:10:03.497440 | orchestrator | 2026-01-02 02:10:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:10:03.499555 | orchestrator | 2026-01-02 02:10:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:10:03.499603 | orchestrator | 2026-01-02 02:10:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:10:06.547745 | orchestrator | 2026-01-02 02:10:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:10:06.550981 | orchestrator | 2026-01-02 02:10:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:10:06.551108 | orchestrator | 2026-01-02 02:10:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:10:09.597642 | orchestrator | 2026-01-02 02:10:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:10:09.599862 | orchestrator | 2026-01-02 02:10:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:10:09.599897 | orchestrator | 2026-01-02 02:10:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:10:12.646382 | orchestrator | 2026-01-02 02:10:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:10:12.648980 | orchestrator | 2026-01-02 02:10:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:10:12.649059 | orchestrator | 2026-01-02 02:10:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:10:15.698226 | orchestrator | 2026-01-02 02:10:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:10:15.701741 | orchestrator | 2026-01-02 02:10:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:10:15.701815 | orchestrator | 2026-01-02 02:10:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:10:18.746353 | orchestrator | 2026-01-02 
02:10:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:10:18.748486 | orchestrator | 2026-01-02 02:10:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:10:18.748522 | orchestrator | 2026-01-02 02:10:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:10:21.797377 | orchestrator | 2026-01-02 02:10:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:10:21.799061 | orchestrator | 2026-01-02 02:10:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:10:21.799208 | orchestrator | 2026-01-02 02:10:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:10:24.844864 | orchestrator | 2026-01-02 02:10:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:10:24.847362 | orchestrator | 2026-01-02 02:10:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:10:24.847548 | orchestrator | 2026-01-02 02:10:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:10:27.896873 | orchestrator | 2026-01-02 02:10:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:10:27.898478 | orchestrator | 2026-01-02 02:10:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:10:27.898567 | orchestrator | 2026-01-02 02:10:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:10:30.958600 | orchestrator | 2026-01-02 02:10:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:10:30.960247 | orchestrator | 2026-01-02 02:10:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:10:30.960284 | orchestrator | 2026-01-02 02:10:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:10:34.007156 | orchestrator | 2026-01-02 02:10:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:10:34.008930 | orchestrator | 2026-01-02 02:10:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 02:10:34.009164 | orchestrator | 2026-01-02 02:10:34 | INFO  | Wait 1 second(s) until the next check
2026-01-02 02:10:37.055689 | orchestrator | 2026-01-02 02:10:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 02:10:37.062118 | orchestrator | 2026-01-02 02:10:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 02:10:37.062191 | orchestrator | 2026-01-02 02:10:37 | INFO  | Wait 1 second(s) until the next check
2026-01-02 02:16:03.553725 | orchestrator | 2026-01-02 02:16:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 02:16:03.555040 | orchestrator | 2026-01-02 02:16:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 02:16:03.555116 | orchestrator | 2026-01-02 02:16:03 | INFO  | Wait 1 second(s) until the next check
2026-01-02 02:16:06.598115 | orchestrator | 2026-01-02 02:16:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 02:16:06.599910 | orchestrator | 2026-01-02 02:16:06 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:16:06.599943 | orchestrator | 2026-01-02 02:16:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:09.647036 | orchestrator | 2026-01-02 02:16:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:16:09.649597 | orchestrator | 2026-01-02 02:16:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:16:09.649638 | orchestrator | 2026-01-02 02:16:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:12.692376 | orchestrator | 2026-01-02 02:16:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:16:12.693427 | orchestrator | 2026-01-02 02:16:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:16:12.693468 | orchestrator | 2026-01-02 02:16:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:15.745011 | orchestrator | 2026-01-02 02:16:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:16:15.747315 | orchestrator | 2026-01-02 02:16:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:16:15.747806 | orchestrator | 2026-01-02 02:16:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:18.795891 | orchestrator | 2026-01-02 02:16:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:16:18.797797 | orchestrator | 2026-01-02 02:16:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:16:18.797853 | orchestrator | 2026-01-02 02:16:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:21.853630 | orchestrator | 2026-01-02 02:16:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:16:21.854955 | orchestrator | 2026-01-02 02:16:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:16:21.855445 | orchestrator | 2026-01-02 02:16:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:24.909477 | orchestrator | 2026-01-02 02:16:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:16:24.911186 | orchestrator | 2026-01-02 02:16:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:16:24.911274 | orchestrator | 2026-01-02 02:16:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:27.966312 | orchestrator | 2026-01-02 02:16:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:16:27.968546 | orchestrator | 2026-01-02 02:16:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:16:27.968846 | orchestrator | 2026-01-02 02:16:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:31.024324 | orchestrator | 2026-01-02 02:16:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:16:31.026651 | orchestrator | 2026-01-02 02:16:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:16:31.026725 | orchestrator | 2026-01-02 02:16:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:34.082939 | orchestrator | 2026-01-02 02:16:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:16:34.085537 | orchestrator | 2026-01-02 02:16:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:16:34.085591 | orchestrator | 2026-01-02 02:16:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:37.137107 | orchestrator | 2026-01-02 02:16:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:16:37.137394 | orchestrator | 2026-01-02 02:16:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:16:37.137422 | orchestrator | 2026-01-02 02:16:37 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:16:40.181654 | orchestrator | 2026-01-02 02:16:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:16:40.182453 | orchestrator | 2026-01-02 02:16:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:16:40.182487 | orchestrator | 2026-01-02 02:16:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:43.238082 | orchestrator | 2026-01-02 02:16:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:16:43.239397 | orchestrator | 2026-01-02 02:16:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:16:43.239493 | orchestrator | 2026-01-02 02:16:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:46.292351 | orchestrator | 2026-01-02 02:16:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:16:46.295550 | orchestrator | 2026-01-02 02:16:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:16:46.295613 | orchestrator | 2026-01-02 02:16:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:49.347577 | orchestrator | 2026-01-02 02:16:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:16:49.351842 | orchestrator | 2026-01-02 02:16:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:16:49.351960 | orchestrator | 2026-01-02 02:16:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:52.397901 | orchestrator | 2026-01-02 02:16:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:16:52.402328 | orchestrator | 2026-01-02 02:16:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:16:52.402384 | orchestrator | 2026-01-02 02:16:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:55.447515 | orchestrator | 2026-01-02 
02:16:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:16:55.449254 | orchestrator | 2026-01-02 02:16:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:16:55.449328 | orchestrator | 2026-01-02 02:16:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:58.500062 | orchestrator | 2026-01-02 02:16:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:16:58.502873 | orchestrator | 2026-01-02 02:16:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:16:58.503323 | orchestrator | 2026-01-02 02:16:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:01.548092 | orchestrator | 2026-01-02 02:17:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:17:01.549200 | orchestrator | 2026-01-02 02:17:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:17:01.549762 | orchestrator | 2026-01-02 02:17:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:04.601099 | orchestrator | 2026-01-02 02:17:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:17:04.603183 | orchestrator | 2026-01-02 02:17:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:17:04.603397 | orchestrator | 2026-01-02 02:17:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:07.656258 | orchestrator | 2026-01-02 02:17:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:17:07.657589 | orchestrator | 2026-01-02 02:17:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:17:07.657630 | orchestrator | 2026-01-02 02:17:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:10.706786 | orchestrator | 2026-01-02 02:17:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:17:10.709681 | orchestrator | 2026-01-02 02:17:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:17:10.710083 | orchestrator | 2026-01-02 02:17:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:13.763023 | orchestrator | 2026-01-02 02:17:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:17:13.764253 | orchestrator | 2026-01-02 02:17:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:17:13.764408 | orchestrator | 2026-01-02 02:17:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:16.812362 | orchestrator | 2026-01-02 02:17:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:17:16.813368 | orchestrator | 2026-01-02 02:17:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:17:16.813403 | orchestrator | 2026-01-02 02:17:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:19.859431 | orchestrator | 2026-01-02 02:17:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:17:19.861452 | orchestrator | 2026-01-02 02:17:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:17:19.861488 | orchestrator | 2026-01-02 02:17:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:22.914596 | orchestrator | 2026-01-02 02:17:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:17:22.916177 | orchestrator | 2026-01-02 02:17:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:17:22.916218 | orchestrator | 2026-01-02 02:17:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:25.965643 | orchestrator | 2026-01-02 02:17:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:17:25.967267 | orchestrator | 2026-01-02 02:17:25 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:17:25.967309 | orchestrator | 2026-01-02 02:17:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:29.025957 | orchestrator | 2026-01-02 02:17:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:17:29.027855 | orchestrator | 2026-01-02 02:17:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:17:29.027897 | orchestrator | 2026-01-02 02:17:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:32.074608 | orchestrator | 2026-01-02 02:17:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:17:32.076180 | orchestrator | 2026-01-02 02:17:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:17:32.076199 | orchestrator | 2026-01-02 02:17:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:35.123716 | orchestrator | 2026-01-02 02:17:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:17:35.125606 | orchestrator | 2026-01-02 02:17:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:17:35.125641 | orchestrator | 2026-01-02 02:17:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:38.171921 | orchestrator | 2026-01-02 02:17:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:17:38.172591 | orchestrator | 2026-01-02 02:17:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:17:38.172627 | orchestrator | 2026-01-02 02:17:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:41.212802 | orchestrator | 2026-01-02 02:17:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:17:41.215043 | orchestrator | 2026-01-02 02:17:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:17:41.215122 | orchestrator | 2026-01-02 02:17:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:44.265729 | orchestrator | 2026-01-02 02:17:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:17:44.266754 | orchestrator | 2026-01-02 02:17:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:17:44.266874 | orchestrator | 2026-01-02 02:17:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:47.312042 | orchestrator | 2026-01-02 02:17:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:17:47.314583 | orchestrator | 2026-01-02 02:17:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:17:47.314760 | orchestrator | 2026-01-02 02:17:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:50.356736 | orchestrator | 2026-01-02 02:17:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:17:50.357872 | orchestrator | 2026-01-02 02:17:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:17:50.357916 | orchestrator | 2026-01-02 02:17:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:53.400070 | orchestrator | 2026-01-02 02:17:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:17:53.401819 | orchestrator | 2026-01-02 02:17:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:17:53.402006 | orchestrator | 2026-01-02 02:17:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:56.457602 | orchestrator | 2026-01-02 02:17:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:17:56.459813 | orchestrator | 2026-01-02 02:17:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:17:56.459855 | orchestrator | 2026-01-02 02:17:56 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:17:59.515182 | orchestrator | 2026-01-02 02:17:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:17:59.518109 | orchestrator | 2026-01-02 02:17:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:17:59.518243 | orchestrator | 2026-01-02 02:17:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:18:02.567093 | orchestrator | 2026-01-02 02:18:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:18:02.569701 | orchestrator | 2026-01-02 02:18:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:18:02.569741 | orchestrator | 2026-01-02 02:18:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:18:05.611165 | orchestrator | 2026-01-02 02:18:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:18:05.612513 | orchestrator | 2026-01-02 02:18:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:18:05.612566 | orchestrator | 2026-01-02 02:18:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:18:08.662178 | orchestrator | 2026-01-02 02:18:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:18:08.663384 | orchestrator | 2026-01-02 02:18:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:18:08.663458 | orchestrator | 2026-01-02 02:18:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:18:11.718390 | orchestrator | 2026-01-02 02:18:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:18:11.721036 | orchestrator | 2026-01-02 02:18:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:18:11.721185 | orchestrator | 2026-01-02 02:18:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:18:14.774783 | orchestrator | 2026-01-02 
02:18:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:18:14.777542 | orchestrator | 2026-01-02 02:18:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:18:14.777591 | orchestrator | 2026-01-02 02:18:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:18:17.827237 | orchestrator | 2026-01-02 02:18:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:18:17.829752 | orchestrator | 2026-01-02 02:18:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:18:17.829826 | orchestrator | 2026-01-02 02:18:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:18:20.874086 | orchestrator | 2026-01-02 02:18:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:18:20.875143 | orchestrator | 2026-01-02 02:18:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:18:20.875179 | orchestrator | 2026-01-02 02:18:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:18:23.919860 | orchestrator | 2026-01-02 02:18:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:18:23.921173 | orchestrator | 2026-01-02 02:18:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:18:23.922847 | orchestrator | 2026-01-02 02:18:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:18:26.966882 | orchestrator | 2026-01-02 02:18:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:18:26.968357 | orchestrator | 2026-01-02 02:18:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:18:26.968461 | orchestrator | 2026-01-02 02:18:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:18:30.018963 | orchestrator | 2026-01-02 02:18:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:18:30.020813 | orchestrator | 2026-01-02 02:18:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:18:30.021421 | orchestrator | 2026-01-02 02:18:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:18:33.066472 | orchestrator | 2026-01-02 02:18:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:18:33.069350 | orchestrator | 2026-01-02 02:18:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:18:33.069378 | orchestrator | 2026-01-02 02:18:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:18:36.113906 | orchestrator | 2026-01-02 02:18:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:18:36.115554 | orchestrator | 2026-01-02 02:18:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:18:36.115694 | orchestrator | 2026-01-02 02:18:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:18:39.157714 | orchestrator | 2026-01-02 02:18:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:18:39.159104 | orchestrator | 2026-01-02 02:18:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:18:39.159146 | orchestrator | 2026-01-02 02:18:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:18:42.209491 | orchestrator | 2026-01-02 02:18:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:18:42.211530 | orchestrator | 2026-01-02 02:18:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:18:42.211558 | orchestrator | 2026-01-02 02:18:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:18:45.257606 | orchestrator | 2026-01-02 02:18:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:18:45.259230 | orchestrator | 2026-01-02 02:18:45 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:18:45.259410 | orchestrator | 2026-01-02 02:18:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:18:48.306009 | orchestrator | 2026-01-02 02:18:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:18:48.308473 | orchestrator | 2026-01-02 02:18:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:18:48.308589 | orchestrator | 2026-01-02 02:18:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:18:51.363224 | orchestrator | 2026-01-02 02:18:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:18:51.365656 | orchestrator | 2026-01-02 02:18:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:18:51.365771 | orchestrator | 2026-01-02 02:18:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:18:54.417907 | orchestrator | 2026-01-02 02:18:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:18:54.419501 | orchestrator | 2026-01-02 02:18:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:18:54.419578 | orchestrator | 2026-01-02 02:18:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:18:57.470915 | orchestrator | 2026-01-02 02:18:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:18:57.473904 | orchestrator | 2026-01-02 02:18:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:18:57.473935 | orchestrator | 2026-01-02 02:18:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:19:00.530618 | orchestrator | 2026-01-02 02:19:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:19:00.532486 | orchestrator | 2026-01-02 02:19:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:19:00.532576 | orchestrator | 2026-01-02 02:19:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:19:03.583027 | orchestrator | 2026-01-02 02:19:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:19:03.584317 | orchestrator | 2026-01-02 02:19:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:19:03.584366 | orchestrator | 2026-01-02 02:19:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:19:06.639714 | orchestrator | 2026-01-02 02:19:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:19:06.641559 | orchestrator | 2026-01-02 02:19:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:19:06.641615 | orchestrator | 2026-01-02 02:19:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:19:09.691840 | orchestrator | 2026-01-02 02:19:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:19:09.693612 | orchestrator | 2026-01-02 02:19:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:19:09.693677 | orchestrator | 2026-01-02 02:19:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:19:12.748915 | orchestrator | 2026-01-02 02:19:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:19:12.751458 | orchestrator | 2026-01-02 02:19:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:19:12.751566 | orchestrator | 2026-01-02 02:19:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:19:15.803005 | orchestrator | 2026-01-02 02:19:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:19:15.806633 | orchestrator | 2026-01-02 02:19:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:19:15.806685 | orchestrator | 2026-01-02 02:19:15 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:19:18.862245 | orchestrator | 2026-01-02 02:19:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:19:18.865906 | orchestrator | 2026-01-02 02:19:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:19:18.866093 | orchestrator | 2026-01-02 02:19:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:19:21.914508 | orchestrator | 2026-01-02 02:19:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:19:21.917998 | orchestrator | 2026-01-02 02:19:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:19:21.918202 | orchestrator | 2026-01-02 02:19:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:19:24.974938 | orchestrator | 2026-01-02 02:19:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:19:24.978669 | orchestrator | 2026-01-02 02:19:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:19:24.978783 | orchestrator | 2026-01-02 02:19:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:19:28.033503 | orchestrator | 2026-01-02 02:19:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:19:28.036909 | orchestrator | 2026-01-02 02:19:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:19:28.037245 | orchestrator | 2026-01-02 02:19:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:19:31.085247 | orchestrator | 2026-01-02 02:19:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:19:31.086647 | orchestrator | 2026-01-02 02:19:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:19:31.086676 | orchestrator | 2026-01-02 02:19:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:19:34.135955 | orchestrator | 2026-01-02 
02:19:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:19:34.139694 | orchestrator | 2026-01-02 02:19:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:19:34.139758 | orchestrator | 2026-01-02 02:19:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:19:37.193038 | orchestrator | 2026-01-02 02:19:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:19:37.194529 | orchestrator | 2026-01-02 02:19:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:19:37.194575 | orchestrator | 2026-01-02 02:19:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:19:40.236576 | orchestrator | 2026-01-02 02:19:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:19:40.238119 | orchestrator | 2026-01-02 02:19:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:19:40.238199 | orchestrator | 2026-01-02 02:19:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:19:43.289723 | orchestrator | 2026-01-02 02:19:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:19:43.291433 | orchestrator | 2026-01-02 02:19:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:19:43.291627 | orchestrator | 2026-01-02 02:19:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:19:46.337642 | orchestrator | 2026-01-02 02:19:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:19:46.339773 | orchestrator | 2026-01-02 02:19:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:19:46.339803 | orchestrator | 2026-01-02 02:19:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:19:49.389645 | orchestrator | 2026-01-02 02:19:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:19:49.391366 | orchestrator | 2026-01-02 02:19:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 02:19:49.391636 | orchestrator | 2026-01-02 02:19:49 | INFO  | Wait 1 second(s) until the next check
[repeated polling output trimmed: tasks e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c and 922cb08d-5634-4147-8b36-6e252cfb52ba were re-checked every ~3 seconds and remained in state STARTED from 02:19:52 through 02:25:03]
2026-01-02 02:25:06.543452 | orchestrator | 2026-01-02 02:25:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:25:06.545654 | orchestrator | 2026-01-02 02:25:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:25:06.545710 | orchestrator | 2026-01-02 02:25:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:09.585938 | orchestrator | 2026-01-02 02:25:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:25:09.587992 | orchestrator | 2026-01-02 02:25:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:25:09.588012 | orchestrator | 2026-01-02 02:25:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:12.634817 | orchestrator | 2026-01-02 02:25:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:25:12.636172 | orchestrator | 2026-01-02 02:25:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:25:12.636253 | orchestrator | 2026-01-02 02:25:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:15.685310 | orchestrator | 2026-01-02 02:25:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:25:15.687409 | orchestrator | 2026-01-02 02:25:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:25:15.687541 | orchestrator | 2026-01-02 02:25:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:18.733724 | orchestrator | 2026-01-02 02:25:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:25:18.733904 | orchestrator | 2026-01-02 02:25:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:25:18.733926 | orchestrator | 2026-01-02 02:25:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:21.780141 | orchestrator | 2026-01-02 02:25:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:25:21.782282 | orchestrator | 2026-01-02 02:25:21 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:25:21.782358 | orchestrator | 2026-01-02 02:25:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:24.827326 | orchestrator | 2026-01-02 02:25:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:25:24.830559 | orchestrator | 2026-01-02 02:25:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:25:24.830629 | orchestrator | 2026-01-02 02:25:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:27.876373 | orchestrator | 2026-01-02 02:25:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:25:27.878465 | orchestrator | 2026-01-02 02:25:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:25:27.878553 | orchestrator | 2026-01-02 02:25:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:30.927805 | orchestrator | 2026-01-02 02:25:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:25:30.931361 | orchestrator | 2026-01-02 02:25:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:25:30.931423 | orchestrator | 2026-01-02 02:25:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:33.979154 | orchestrator | 2026-01-02 02:25:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:25:33.981133 | orchestrator | 2026-01-02 02:25:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:25:33.981230 | orchestrator | 2026-01-02 02:25:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:37.026439 | orchestrator | 2026-01-02 02:25:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:25:37.028609 | orchestrator | 2026-01-02 02:25:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:25:37.028643 | orchestrator | 2026-01-02 02:25:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:40.073069 | orchestrator | 2026-01-02 02:25:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:25:40.075400 | orchestrator | 2026-01-02 02:25:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:25:40.075473 | orchestrator | 2026-01-02 02:25:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:43.121770 | orchestrator | 2026-01-02 02:25:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:25:43.123258 | orchestrator | 2026-01-02 02:25:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:25:43.123295 | orchestrator | 2026-01-02 02:25:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:46.169058 | orchestrator | 2026-01-02 02:25:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:25:46.170127 | orchestrator | 2026-01-02 02:25:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:25:46.170163 | orchestrator | 2026-01-02 02:25:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:49.223160 | orchestrator | 2026-01-02 02:25:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:25:49.224960 | orchestrator | 2026-01-02 02:25:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:25:49.225056 | orchestrator | 2026-01-02 02:25:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:52.271636 | orchestrator | 2026-01-02 02:25:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:25:52.273383 | orchestrator | 2026-01-02 02:25:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:25:52.273433 | orchestrator | 2026-01-02 02:25:52 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:25:55.322546 | orchestrator | 2026-01-02 02:25:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:25:55.324275 | orchestrator | 2026-01-02 02:25:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:25:55.324319 | orchestrator | 2026-01-02 02:25:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:58.374619 | orchestrator | 2026-01-02 02:25:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:25:58.376112 | orchestrator | 2026-01-02 02:25:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:25:58.376166 | orchestrator | 2026-01-02 02:25:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:01.421373 | orchestrator | 2026-01-02 02:26:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:26:01.424192 | orchestrator | 2026-01-02 02:26:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:26:01.424261 | orchestrator | 2026-01-02 02:26:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:04.473470 | orchestrator | 2026-01-02 02:26:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:26:04.475674 | orchestrator | 2026-01-02 02:26:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:26:04.475715 | orchestrator | 2026-01-02 02:26:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:07.526782 | orchestrator | 2026-01-02 02:26:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:26:07.528448 | orchestrator | 2026-01-02 02:26:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:26:07.528637 | orchestrator | 2026-01-02 02:26:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:10.579338 | orchestrator | 2026-01-02 
02:26:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:26:10.581687 | orchestrator | 2026-01-02 02:26:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:26:10.581751 | orchestrator | 2026-01-02 02:26:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:13.627746 | orchestrator | 2026-01-02 02:26:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:26:13.628929 | orchestrator | 2026-01-02 02:26:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:26:13.628963 | orchestrator | 2026-01-02 02:26:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:16.677383 | orchestrator | 2026-01-02 02:26:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:26:16.679408 | orchestrator | 2026-01-02 02:26:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:26:16.679442 | orchestrator | 2026-01-02 02:26:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:19.725734 | orchestrator | 2026-01-02 02:26:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:26:19.727575 | orchestrator | 2026-01-02 02:26:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:26:19.727613 | orchestrator | 2026-01-02 02:26:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:22.767126 | orchestrator | 2026-01-02 02:26:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:26:22.768274 | orchestrator | 2026-01-02 02:26:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:26:22.768309 | orchestrator | 2026-01-02 02:26:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:25.816603 | orchestrator | 2026-01-02 02:26:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:26:25.820340 | orchestrator | 2026-01-02 02:26:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:26:25.821173 | orchestrator | 2026-01-02 02:26:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:28.878752 | orchestrator | 2026-01-02 02:26:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:26:28.880672 | orchestrator | 2026-01-02 02:26:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:26:28.880753 | orchestrator | 2026-01-02 02:26:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:31.928363 | orchestrator | 2026-01-02 02:26:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:26:31.932554 | orchestrator | 2026-01-02 02:26:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:26:31.932603 | orchestrator | 2026-01-02 02:26:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:34.981153 | orchestrator | 2026-01-02 02:26:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:26:34.982937 | orchestrator | 2026-01-02 02:26:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:26:34.982997 | orchestrator | 2026-01-02 02:26:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:38.030943 | orchestrator | 2026-01-02 02:26:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:26:38.031048 | orchestrator | 2026-01-02 02:26:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:26:38.031129 | orchestrator | 2026-01-02 02:26:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:41.081420 | orchestrator | 2026-01-02 02:26:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:26:41.082559 | orchestrator | 2026-01-02 02:26:41 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:26:41.082619 | orchestrator | 2026-01-02 02:26:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:44.126639 | orchestrator | 2026-01-02 02:26:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:26:44.128486 | orchestrator | 2026-01-02 02:26:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:26:44.128627 | orchestrator | 2026-01-02 02:26:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:47.175576 | orchestrator | 2026-01-02 02:26:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:26:47.178766 | orchestrator | 2026-01-02 02:26:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:26:47.178819 | orchestrator | 2026-01-02 02:26:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:50.228963 | orchestrator | 2026-01-02 02:26:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:26:50.229310 | orchestrator | 2026-01-02 02:26:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:26:50.229420 | orchestrator | 2026-01-02 02:26:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:53.277166 | orchestrator | 2026-01-02 02:26:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:26:53.279232 | orchestrator | 2026-01-02 02:26:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:26:53.279606 | orchestrator | 2026-01-02 02:26:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:56.325355 | orchestrator | 2026-01-02 02:26:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:26:56.326834 | orchestrator | 2026-01-02 02:26:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:26:56.326871 | orchestrator | 2026-01-02 02:26:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:59.374844 | orchestrator | 2026-01-02 02:26:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:26:59.375938 | orchestrator | 2026-01-02 02:26:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:26:59.376098 | orchestrator | 2026-01-02 02:26:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:27:02.430149 | orchestrator | 2026-01-02 02:27:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:27:02.430929 | orchestrator | 2026-01-02 02:27:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:27:02.430987 | orchestrator | 2026-01-02 02:27:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:27:05.485940 | orchestrator | 2026-01-02 02:27:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:27:05.490113 | orchestrator | 2026-01-02 02:27:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:27:05.490156 | orchestrator | 2026-01-02 02:27:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:27:08.535392 | orchestrator | 2026-01-02 02:27:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:27:08.536990 | orchestrator | 2026-01-02 02:27:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:27:08.537105 | orchestrator | 2026-01-02 02:27:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:27:11.583483 | orchestrator | 2026-01-02 02:27:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:27:11.584939 | orchestrator | 2026-01-02 02:27:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:27:11.584985 | orchestrator | 2026-01-02 02:27:11 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:27:14.629708 | orchestrator | 2026-01-02 02:27:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:27:14.631576 | orchestrator | 2026-01-02 02:27:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:27:14.631639 | orchestrator | 2026-01-02 02:27:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:27:17.682339 | orchestrator | 2026-01-02 02:27:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:27:17.684167 | orchestrator | 2026-01-02 02:27:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:27:17.684562 | orchestrator | 2026-01-02 02:27:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:27:20.740856 | orchestrator | 2026-01-02 02:27:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:27:20.743129 | orchestrator | 2026-01-02 02:27:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:27:20.743188 | orchestrator | 2026-01-02 02:27:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:27:23.786591 | orchestrator | 2026-01-02 02:27:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:27:23.787935 | orchestrator | 2026-01-02 02:27:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:27:23.788031 | orchestrator | 2026-01-02 02:27:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:27:26.836972 | orchestrator | 2026-01-02 02:27:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:27:26.839502 | orchestrator | 2026-01-02 02:27:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:27:26.839592 | orchestrator | 2026-01-02 02:27:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:27:29.886621 | orchestrator | 2026-01-02 
02:27:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:27:29.888636 | orchestrator | 2026-01-02 02:27:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:27:29.888717 | orchestrator | 2026-01-02 02:27:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:27:32.940488 | orchestrator | 2026-01-02 02:27:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:27:32.941815 | orchestrator | 2026-01-02 02:27:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:27:32.941851 | orchestrator | 2026-01-02 02:27:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:27:35.990383 | orchestrator | 2026-01-02 02:27:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:27:35.992443 | orchestrator | 2026-01-02 02:27:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:27:35.992523 | orchestrator | 2026-01-02 02:27:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:27:39.044359 | orchestrator | 2026-01-02 02:27:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:27:39.046474 | orchestrator | 2026-01-02 02:27:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:27:39.046620 | orchestrator | 2026-01-02 02:27:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:27:42.092092 | orchestrator | 2026-01-02 02:27:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:27:42.093237 | orchestrator | 2026-01-02 02:27:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:27:42.093295 | orchestrator | 2026-01-02 02:27:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:27:45.142227 | orchestrator | 2026-01-02 02:27:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:27:45.144179 | orchestrator | 2026-01-02 02:27:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:27:45.144202 | orchestrator | 2026-01-02 02:27:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:27:48.193264 | orchestrator | 2026-01-02 02:27:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:27:48.194503 | orchestrator | 2026-01-02 02:27:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:27:48.194623 | orchestrator | 2026-01-02 02:27:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:27:51.250606 | orchestrator | 2026-01-02 02:27:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:27:51.254868 | orchestrator | 2026-01-02 02:27:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:27:51.255092 | orchestrator | 2026-01-02 02:27:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:27:54.305264 | orchestrator | 2026-01-02 02:27:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:27:54.307079 | orchestrator | 2026-01-02 02:27:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:27:54.307120 | orchestrator | 2026-01-02 02:27:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:27:57.356203 | orchestrator | 2026-01-02 02:27:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:27:57.356429 | orchestrator | 2026-01-02 02:27:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:27:57.356467 | orchestrator | 2026-01-02 02:27:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:28:00.401233 | orchestrator | 2026-01-02 02:28:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:28:00.401978 | orchestrator | 2026-01-02 02:28:00 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:28:00.402013 | orchestrator | 2026-01-02 02:28:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:28:03.457302 | orchestrator | 2026-01-02 02:28:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:28:03.458355 | orchestrator | 2026-01-02 02:28:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:28:03.458398 | orchestrator | 2026-01-02 02:28:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:28:06.499778 | orchestrator | 2026-01-02 02:28:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:28:06.502249 | orchestrator | 2026-01-02 02:28:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:28:06.502301 | orchestrator | 2026-01-02 02:28:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:28:09.544392 | orchestrator | 2026-01-02 02:28:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:28:09.546090 | orchestrator | 2026-01-02 02:28:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:28:09.546115 | orchestrator | 2026-01-02 02:28:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:28:12.593384 | orchestrator | 2026-01-02 02:28:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:28:12.594931 | orchestrator | 2026-01-02 02:28:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:28:12.594979 | orchestrator | 2026-01-02 02:28:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:28:15.640473 | orchestrator | 2026-01-02 02:28:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:28:15.641756 | orchestrator | 2026-01-02 02:28:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:28:15.641804 | orchestrator | 2026-01-02 02:28:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:28:18.694633 | orchestrator | 2026-01-02 02:28:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:28:18.699051 | orchestrator | 2026-01-02 02:28:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:28:18.699106 | orchestrator | 2026-01-02 02:28:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:28:21.749770 | orchestrator | 2026-01-02 02:28:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:28:21.752118 | orchestrator | 2026-01-02 02:28:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:28:21.752159 | orchestrator | 2026-01-02 02:28:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:28:24.802348 | orchestrator | 2026-01-02 02:28:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:28:24.803714 | orchestrator | 2026-01-02 02:28:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:28:24.803746 | orchestrator | 2026-01-02 02:28:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:28:27.859187 | orchestrator | 2026-01-02 02:28:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:28:27.861056 | orchestrator | 2026-01-02 02:28:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:28:27.861231 | orchestrator | 2026-01-02 02:28:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:28:30.911129 | orchestrator | 2026-01-02 02:28:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:28:30.913060 | orchestrator | 2026-01-02 02:28:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:28:30.913229 | orchestrator | 2026-01-02 02:28:30 | INFO  | Wait 1 second(s) 
until the next check
2026-01-02 02:28:33.970755 | orchestrator | 2026-01-02 02:28:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 02:28:33.972373 | orchestrator | 2026-01-02 02:28:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 02:28:33.972515 | orchestrator | 2026-01-02 02:28:33 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated roughly every 3 seconds from 02:28:37 through 02:33:45; both tasks remained in state STARTED throughout ...]
2026-01-02 02:33:48.173357 | orchestrator | 2026-01-02 02:33:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 02:33:48.173524 | orchestrator | 2026-01-02 02:33:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 02:33:48.173545 | orchestrator | 2026-01-02 02:33:48 | INFO  | Wait 1 second(s)
until the next check 2026-01-02 02:33:51.227484 | orchestrator | 2026-01-02 02:33:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:33:51.229029 | orchestrator | 2026-01-02 02:33:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:33:51.229083 | orchestrator | 2026-01-02 02:33:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:54.279070 | orchestrator | 2026-01-02 02:33:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:33:54.282160 | orchestrator | 2026-01-02 02:33:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:33:54.282241 | orchestrator | 2026-01-02 02:33:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:57.323145 | orchestrator | 2026-01-02 02:33:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:33:57.324465 | orchestrator | 2026-01-02 02:33:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:33:57.324749 | orchestrator | 2026-01-02 02:33:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:00.378595 | orchestrator | 2026-01-02 02:34:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:34:00.381490 | orchestrator | 2026-01-02 02:34:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:34:00.381543 | orchestrator | 2026-01-02 02:34:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:03.437693 | orchestrator | 2026-01-02 02:34:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:34:03.439221 | orchestrator | 2026-01-02 02:34:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:34:03.439274 | orchestrator | 2026-01-02 02:34:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:06.490280 | orchestrator | 2026-01-02 
02:34:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:34:06.492512 | orchestrator | 2026-01-02 02:34:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:34:06.492554 | orchestrator | 2026-01-02 02:34:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:09.543244 | orchestrator | 2026-01-02 02:34:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:34:09.545168 | orchestrator | 2026-01-02 02:34:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:34:09.545243 | orchestrator | 2026-01-02 02:34:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:12.597984 | orchestrator | 2026-01-02 02:34:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:34:12.600116 | orchestrator | 2026-01-02 02:34:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:34:12.600158 | orchestrator | 2026-01-02 02:34:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:15.645754 | orchestrator | 2026-01-02 02:34:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:34:15.646749 | orchestrator | 2026-01-02 02:34:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:34:15.646786 | orchestrator | 2026-01-02 02:34:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:18.703336 | orchestrator | 2026-01-02 02:34:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:34:18.705890 | orchestrator | 2026-01-02 02:34:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:34:18.705964 | orchestrator | 2026-01-02 02:34:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:21.754552 | orchestrator | 2026-01-02 02:34:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:34:21.756589 | orchestrator | 2026-01-02 02:34:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:34:21.756709 | orchestrator | 2026-01-02 02:34:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:24.805519 | orchestrator | 2026-01-02 02:34:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:34:24.807274 | orchestrator | 2026-01-02 02:34:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:34:24.807320 | orchestrator | 2026-01-02 02:34:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:27.857059 | orchestrator | 2026-01-02 02:34:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:34:27.859717 | orchestrator | 2026-01-02 02:34:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:34:27.859873 | orchestrator | 2026-01-02 02:34:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:30.910303 | orchestrator | 2026-01-02 02:34:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:34:30.914301 | orchestrator | 2026-01-02 02:34:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:34:30.914925 | orchestrator | 2026-01-02 02:34:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:33.973281 | orchestrator | 2026-01-02 02:34:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:34:33.974540 | orchestrator | 2026-01-02 02:34:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:34:33.974581 | orchestrator | 2026-01-02 02:34:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:37.020153 | orchestrator | 2026-01-02 02:34:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:34:37.023130 | orchestrator | 2026-01-02 02:34:37 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:34:37.023208 | orchestrator | 2026-01-02 02:34:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:40.065610 | orchestrator | 2026-01-02 02:34:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:34:40.068079 | orchestrator | 2026-01-02 02:34:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:34:40.068110 | orchestrator | 2026-01-02 02:34:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:43.108426 | orchestrator | 2026-01-02 02:34:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:34:43.109222 | orchestrator | 2026-01-02 02:34:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:34:43.109322 | orchestrator | 2026-01-02 02:34:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:46.161093 | orchestrator | 2026-01-02 02:34:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:34:46.162830 | orchestrator | 2026-01-02 02:34:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:34:46.162940 | orchestrator | 2026-01-02 02:34:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:49.223216 | orchestrator | 2026-01-02 02:34:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:34:49.224368 | orchestrator | 2026-01-02 02:34:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:34:49.224533 | orchestrator | 2026-01-02 02:34:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:52.275260 | orchestrator | 2026-01-02 02:34:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:34:52.276249 | orchestrator | 2026-01-02 02:34:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:34:52.276299 | orchestrator | 2026-01-02 02:34:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:55.318979 | orchestrator | 2026-01-02 02:34:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:34:55.320864 | orchestrator | 2026-01-02 02:34:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:34:55.320896 | orchestrator | 2026-01-02 02:34:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:58.373433 | orchestrator | 2026-01-02 02:34:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:34:58.376171 | orchestrator | 2026-01-02 02:34:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:34:58.376218 | orchestrator | 2026-01-02 02:34:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:01.425789 | orchestrator | 2026-01-02 02:35:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:35:01.428162 | orchestrator | 2026-01-02 02:35:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:01.428235 | orchestrator | 2026-01-02 02:35:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:04.477297 | orchestrator | 2026-01-02 02:35:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:35:04.478552 | orchestrator | 2026-01-02 02:35:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:04.478742 | orchestrator | 2026-01-02 02:35:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:07.529140 | orchestrator | 2026-01-02 02:35:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:35:07.530982 | orchestrator | 2026-01-02 02:35:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:07.531277 | orchestrator | 2026-01-02 02:35:07 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:35:10.580751 | orchestrator | 2026-01-02 02:35:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:35:10.582380 | orchestrator | 2026-01-02 02:35:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:10.582422 | orchestrator | 2026-01-02 02:35:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:13.632056 | orchestrator | 2026-01-02 02:35:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:35:13.633282 | orchestrator | 2026-01-02 02:35:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:13.633336 | orchestrator | 2026-01-02 02:35:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:16.674814 | orchestrator | 2026-01-02 02:35:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:35:16.675032 | orchestrator | 2026-01-02 02:35:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:16.675067 | orchestrator | 2026-01-02 02:35:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:19.722932 | orchestrator | 2026-01-02 02:35:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:35:19.724956 | orchestrator | 2026-01-02 02:35:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:19.724999 | orchestrator | 2026-01-02 02:35:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:22.771611 | orchestrator | 2026-01-02 02:35:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:35:22.773086 | orchestrator | 2026-01-02 02:35:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:22.773122 | orchestrator | 2026-01-02 02:35:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:25.823749 | orchestrator | 2026-01-02 
02:35:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:35:25.824633 | orchestrator | 2026-01-02 02:35:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:25.824693 | orchestrator | 2026-01-02 02:35:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:28.880420 | orchestrator | 2026-01-02 02:35:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:35:28.882191 | orchestrator | 2026-01-02 02:35:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:28.882231 | orchestrator | 2026-01-02 02:35:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:31.927103 | orchestrator | 2026-01-02 02:35:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:35:31.930001 | orchestrator | 2026-01-02 02:35:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:31.930104 | orchestrator | 2026-01-02 02:35:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:34.982966 | orchestrator | 2026-01-02 02:35:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:35:34.984814 | orchestrator | 2026-01-02 02:35:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:34.984931 | orchestrator | 2026-01-02 02:35:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:38.029174 | orchestrator | 2026-01-02 02:35:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:35:38.029357 | orchestrator | 2026-01-02 02:35:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:38.029379 | orchestrator | 2026-01-02 02:35:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:41.082854 | orchestrator | 2026-01-02 02:35:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:35:41.086136 | orchestrator | 2026-01-02 02:35:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:41.086226 | orchestrator | 2026-01-02 02:35:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:44.136045 | orchestrator | 2026-01-02 02:35:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:35:44.137484 | orchestrator | 2026-01-02 02:35:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:44.137538 | orchestrator | 2026-01-02 02:35:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:47.185973 | orchestrator | 2026-01-02 02:35:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:35:47.188032 | orchestrator | 2026-01-02 02:35:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:47.188080 | orchestrator | 2026-01-02 02:35:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:50.232178 | orchestrator | 2026-01-02 02:35:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:35:50.233684 | orchestrator | 2026-01-02 02:35:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:50.233726 | orchestrator | 2026-01-02 02:35:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:53.283027 | orchestrator | 2026-01-02 02:35:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:35:53.284950 | orchestrator | 2026-01-02 02:35:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:53.285156 | orchestrator | 2026-01-02 02:35:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:56.332101 | orchestrator | 2026-01-02 02:35:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:35:56.334264 | orchestrator | 2026-01-02 02:35:56 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:56.334321 | orchestrator | 2026-01-02 02:35:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:59.386578 | orchestrator | 2026-01-02 02:35:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:35:59.388362 | orchestrator | 2026-01-02 02:35:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:35:59.388394 | orchestrator | 2026-01-02 02:35:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:36:02.441490 | orchestrator | 2026-01-02 02:36:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:36:02.444433 | orchestrator | 2026-01-02 02:36:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:36:02.444522 | orchestrator | 2026-01-02 02:36:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:36:05.491245 | orchestrator | 2026-01-02 02:36:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:36:05.494232 | orchestrator | 2026-01-02 02:36:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:36:05.494321 | orchestrator | 2026-01-02 02:36:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:36:08.551234 | orchestrator | 2026-01-02 02:36:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:36:08.552513 | orchestrator | 2026-01-02 02:36:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:36:08.552579 | orchestrator | 2026-01-02 02:36:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:36:11.604092 | orchestrator | 2026-01-02 02:36:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:36:11.605333 | orchestrator | 2026-01-02 02:36:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:36:11.605372 | orchestrator | 2026-01-02 02:36:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:36:14.651439 | orchestrator | 2026-01-02 02:36:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:36:14.653292 | orchestrator | 2026-01-02 02:36:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:36:14.653340 | orchestrator | 2026-01-02 02:36:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:36:17.704484 | orchestrator | 2026-01-02 02:36:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:36:17.705877 | orchestrator | 2026-01-02 02:36:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:36:17.705935 | orchestrator | 2026-01-02 02:36:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:36:20.756848 | orchestrator | 2026-01-02 02:36:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:36:20.759560 | orchestrator | 2026-01-02 02:36:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:36:20.759603 | orchestrator | 2026-01-02 02:36:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:36:23.810562 | orchestrator | 2026-01-02 02:36:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:36:23.812441 | orchestrator | 2026-01-02 02:36:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:36:23.812478 | orchestrator | 2026-01-02 02:36:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:36:26.863313 | orchestrator | 2026-01-02 02:36:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:36:26.864637 | orchestrator | 2026-01-02 02:36:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:36:26.864768 | orchestrator | 2026-01-02 02:36:26 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:36:29.917166 | orchestrator | 2026-01-02 02:36:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:36:29.918386 | orchestrator | 2026-01-02 02:36:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:36:29.918529 | orchestrator | 2026-01-02 02:36:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:36:32.966431 | orchestrator | 2026-01-02 02:36:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:36:32.968497 | orchestrator | 2026-01-02 02:36:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:36:32.968591 | orchestrator | 2026-01-02 02:36:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:36:36.013643 | orchestrator | 2026-01-02 02:36:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:36:36.015254 | orchestrator | 2026-01-02 02:36:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:36:36.015301 | orchestrator | 2026-01-02 02:36:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:36:39.060973 | orchestrator | 2026-01-02 02:36:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:36:39.061991 | orchestrator | 2026-01-02 02:36:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:36:39.062122 | orchestrator | 2026-01-02 02:36:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:36:42.102616 | orchestrator | 2026-01-02 02:36:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:36:42.102782 | orchestrator | 2026-01-02 02:36:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:36:42.102803 | orchestrator | 2026-01-02 02:36:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:36:45.151358 | orchestrator | 2026-01-02 
02:36:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:36:45.153632 | orchestrator | 2026-01-02 02:36:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:36:45.153831 | orchestrator | 2026-01-02 02:36:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:36:48.205950 | orchestrator | 2026-01-02 02:36:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:36:48.207106 | orchestrator | 2026-01-02 02:36:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:36:48.207145 | orchestrator | 2026-01-02 02:36:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:36:51.256020 | orchestrator | 2026-01-02 02:36:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:36:51.256588 | orchestrator | 2026-01-02 02:36:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:36:51.256743 | orchestrator | 2026-01-02 02:36:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:36:54.306356 | orchestrator | 2026-01-02 02:36:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:36:54.308769 | orchestrator | 2026-01-02 02:36:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:36:54.308890 | orchestrator | 2026-01-02 02:36:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:36:57.353170 | orchestrator | 2026-01-02 02:36:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:36:57.355895 | orchestrator | 2026-01-02 02:36:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:36:57.355982 | orchestrator | 2026-01-02 02:36:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:37:00.406369 | orchestrator | 2026-01-02 02:37:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:37:00.409029 | orchestrator | 2026-01-02 02:37:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:37:00.409129 | orchestrator | 2026-01-02 02:37:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:37:03.462366 | orchestrator | 2026-01-02 02:37:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:37:03.464020 | orchestrator | 2026-01-02 02:37:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:37:03.464157 | orchestrator | 2026-01-02 02:37:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:37:06.510959 | orchestrator | 2026-01-02 02:37:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:37:06.514256 | orchestrator | 2026-01-02 02:37:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:37:06.514492 | orchestrator | 2026-01-02 02:37:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:37:09.563598 | orchestrator | 2026-01-02 02:37:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:37:09.565983 | orchestrator | 2026-01-02 02:37:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:37:09.566242 | orchestrator | 2026-01-02 02:37:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:37:12.611088 | orchestrator | 2026-01-02 02:37:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:37:12.612665 | orchestrator | 2026-01-02 02:37:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:37:12.612809 | orchestrator | 2026-01-02 02:37:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:37:15.656084 | orchestrator | 2026-01-02 02:37:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:37:15.658202 | orchestrator | 2026-01-02 02:37:15 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:37:15.658265 | orchestrator | 2026-01-02 02:37:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:37:18.704067 | orchestrator | 2026-01-02 02:37:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:37:18.707364 | orchestrator | 2026-01-02 02:37:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:37:18.707401 | orchestrator | 2026-01-02 02:37:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:37:21.754986 | orchestrator | 2026-01-02 02:37:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:37:21.757483 | orchestrator | 2026-01-02 02:37:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:37:21.757583 | orchestrator | 2026-01-02 02:37:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:37:24.800938 | orchestrator | 2026-01-02 02:37:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:37:24.802566 | orchestrator | 2026-01-02 02:37:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:37:24.802612 | orchestrator | 2026-01-02 02:37:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:37:27.857569 | orchestrator | 2026-01-02 02:37:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:37:27.858568 | orchestrator | 2026-01-02 02:37:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:37:27.858608 | orchestrator | 2026-01-02 02:37:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:37:30.905226 | orchestrator | 2026-01-02 02:37:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:37:30.906783 | orchestrator | 2026-01-02 02:37:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:37:30.906988 | orchestrator | 2026-01-02 02:37:30 | INFO  | Wait 1 second(s) until the next check
2026-01-02 02:37:33.958749 | orchestrator | 2026-01-02 02:37:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 02:37:33.960847 | orchestrator | 2026-01-02 02:37:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 02:37:33.960888 | orchestrator | 2026-01-02 02:37:33 | INFO  | Wait 1 second(s) until the next check
[... identical checks repeated roughly every 3 seconds from 02:37:37 through 02:43:00: both tasks remain in state STARTED, followed by "Wait 1 second(s) until the next check" ...]
2026-01-02 02:43:03.468697 | orchestrator | 2026-01-02 02:43:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 02:43:03.471070 | orchestrator | 2026-01-02 02:43:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 02:43:03.471186 | orchestrator | 2026-01-02 02:43:03 | INFO  | Wait 1 second(s)
until the next check 2026-01-02 02:43:06.526163 | orchestrator | 2026-01-02 02:43:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:43:06.528048 | orchestrator | 2026-01-02 02:43:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:43:06.528180 | orchestrator | 2026-01-02 02:43:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:09.578973 | orchestrator | 2026-01-02 02:43:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:43:09.580669 | orchestrator | 2026-01-02 02:43:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:43:09.580703 | orchestrator | 2026-01-02 02:43:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:12.631267 | orchestrator | 2026-01-02 02:43:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:43:12.632806 | orchestrator | 2026-01-02 02:43:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:43:12.633316 | orchestrator | 2026-01-02 02:43:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:15.684199 | orchestrator | 2026-01-02 02:43:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:43:15.687087 | orchestrator | 2026-01-02 02:43:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:43:15.687143 | orchestrator | 2026-01-02 02:43:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:18.734798 | orchestrator | 2026-01-02 02:43:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:43:18.736332 | orchestrator | 2026-01-02 02:43:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:43:18.736473 | orchestrator | 2026-01-02 02:43:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:21.790431 | orchestrator | 2026-01-02 
02:43:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:43:21.793057 | orchestrator | 2026-01-02 02:43:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:43:21.793148 | orchestrator | 2026-01-02 02:43:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:24.845063 | orchestrator | 2026-01-02 02:43:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:43:24.848512 | orchestrator | 2026-01-02 02:43:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:43:24.848581 | orchestrator | 2026-01-02 02:43:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:27.895699 | orchestrator | 2026-01-02 02:43:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:43:27.898289 | orchestrator | 2026-01-02 02:43:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:43:27.898460 | orchestrator | 2026-01-02 02:43:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:30.949019 | orchestrator | 2026-01-02 02:43:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:43:30.951222 | orchestrator | 2026-01-02 02:43:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:43:30.951310 | orchestrator | 2026-01-02 02:43:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:34.004168 | orchestrator | 2026-01-02 02:43:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:43:34.005215 | orchestrator | 2026-01-02 02:43:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:43:34.005246 | orchestrator | 2026-01-02 02:43:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:37.051861 | orchestrator | 2026-01-02 02:43:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:43:37.053557 | orchestrator | 2026-01-02 02:43:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:43:37.053590 | orchestrator | 2026-01-02 02:43:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:40.103409 | orchestrator | 2026-01-02 02:43:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:43:40.107023 | orchestrator | 2026-01-02 02:43:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:43:40.107638 | orchestrator | 2026-01-02 02:43:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:43.159008 | orchestrator | 2026-01-02 02:43:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:43:43.160215 | orchestrator | 2026-01-02 02:43:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:43:43.160245 | orchestrator | 2026-01-02 02:43:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:46.214631 | orchestrator | 2026-01-02 02:43:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:43:46.216560 | orchestrator | 2026-01-02 02:43:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:43:46.216587 | orchestrator | 2026-01-02 02:43:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:49.261045 | orchestrator | 2026-01-02 02:43:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:43:49.263226 | orchestrator | 2026-01-02 02:43:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:43:49.263298 | orchestrator | 2026-01-02 02:43:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:52.317476 | orchestrator | 2026-01-02 02:43:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:43:52.326974 | orchestrator | 2026-01-02 02:43:52 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:43:52.327049 | orchestrator | 2026-01-02 02:43:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:55.367852 | orchestrator | 2026-01-02 02:43:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:43:55.370915 | orchestrator | 2026-01-02 02:43:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:43:55.371172 | orchestrator | 2026-01-02 02:43:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:58.415757 | orchestrator | 2026-01-02 02:43:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:43:58.418469 | orchestrator | 2026-01-02 02:43:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:43:58.418530 | orchestrator | 2026-01-02 02:43:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:01.467629 | orchestrator | 2026-01-02 02:44:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:44:01.469223 | orchestrator | 2026-01-02 02:44:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:44:01.469370 | orchestrator | 2026-01-02 02:44:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:04.515298 | orchestrator | 2026-01-02 02:44:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:44:04.516465 | orchestrator | 2026-01-02 02:44:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:44:04.516516 | orchestrator | 2026-01-02 02:44:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:07.561364 | orchestrator | 2026-01-02 02:44:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:44:07.563297 | orchestrator | 2026-01-02 02:44:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:44:07.563375 | orchestrator | 2026-01-02 02:44:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:10.608084 | orchestrator | 2026-01-02 02:44:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:44:10.609737 | orchestrator | 2026-01-02 02:44:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:44:10.609804 | orchestrator | 2026-01-02 02:44:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:13.660578 | orchestrator | 2026-01-02 02:44:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:44:13.662257 | orchestrator | 2026-01-02 02:44:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:44:13.662307 | orchestrator | 2026-01-02 02:44:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:16.716657 | orchestrator | 2026-01-02 02:44:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:44:16.720387 | orchestrator | 2026-01-02 02:44:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:44:16.720640 | orchestrator | 2026-01-02 02:44:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:19.767588 | orchestrator | 2026-01-02 02:44:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:44:19.768755 | orchestrator | 2026-01-02 02:44:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:44:19.768955 | orchestrator | 2026-01-02 02:44:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:22.814933 | orchestrator | 2026-01-02 02:44:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:44:22.816357 | orchestrator | 2026-01-02 02:44:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:44:22.816394 | orchestrator | 2026-01-02 02:44:22 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:44:25.866098 | orchestrator | 2026-01-02 02:44:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:44:25.868524 | orchestrator | 2026-01-02 02:44:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:44:25.868560 | orchestrator | 2026-01-02 02:44:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:28.910830 | orchestrator | 2026-01-02 02:44:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:44:28.911191 | orchestrator | 2026-01-02 02:44:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:44:28.911233 | orchestrator | 2026-01-02 02:44:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:31.954530 | orchestrator | 2026-01-02 02:44:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:44:31.956558 | orchestrator | 2026-01-02 02:44:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:44:31.956682 | orchestrator | 2026-01-02 02:44:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:35.004554 | orchestrator | 2026-01-02 02:44:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:44:35.006094 | orchestrator | 2026-01-02 02:44:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:44:35.006124 | orchestrator | 2026-01-02 02:44:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:38.048730 | orchestrator | 2026-01-02 02:44:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:44:38.048954 | orchestrator | 2026-01-02 02:44:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:44:38.048976 | orchestrator | 2026-01-02 02:44:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:41.100738 | orchestrator | 2026-01-02 
02:44:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:44:41.102246 | orchestrator | 2026-01-02 02:44:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:44:41.102386 | orchestrator | 2026-01-02 02:44:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:44.142269 | orchestrator | 2026-01-02 02:44:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:44:44.143094 | orchestrator | 2026-01-02 02:44:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:44:44.143114 | orchestrator | 2026-01-02 02:44:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:47.186608 | orchestrator | 2026-01-02 02:44:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:44:47.189317 | orchestrator | 2026-01-02 02:44:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:44:47.189807 | orchestrator | 2026-01-02 02:44:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:50.233574 | orchestrator | 2026-01-02 02:44:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:44:50.234956 | orchestrator | 2026-01-02 02:44:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:44:50.235086 | orchestrator | 2026-01-02 02:44:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:53.284065 | orchestrator | 2026-01-02 02:44:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:44:53.285603 | orchestrator | 2026-01-02 02:44:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:44:53.285895 | orchestrator | 2026-01-02 02:44:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:56.323843 | orchestrator | 2026-01-02 02:44:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:44:56.324767 | orchestrator | 2026-01-02 02:44:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:44:56.324854 | orchestrator | 2026-01-02 02:44:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:59.366140 | orchestrator | 2026-01-02 02:44:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:44:59.367162 | orchestrator | 2026-01-02 02:44:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:44:59.367211 | orchestrator | 2026-01-02 02:44:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:45:02.407769 | orchestrator | 2026-01-02 02:45:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:45:02.408566 | orchestrator | 2026-01-02 02:45:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:45:02.408666 | orchestrator | 2026-01-02 02:45:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:45:05.456496 | orchestrator | 2026-01-02 02:45:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:45:05.457714 | orchestrator | 2026-01-02 02:45:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:45:05.458104 | orchestrator | 2026-01-02 02:45:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:45:08.507286 | orchestrator | 2026-01-02 02:45:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:45:08.510188 | orchestrator | 2026-01-02 02:45:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:45:08.510626 | orchestrator | 2026-01-02 02:45:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:45:11.558599 | orchestrator | 2026-01-02 02:45:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:45:11.561014 | orchestrator | 2026-01-02 02:45:11 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:45:11.561069 | orchestrator | 2026-01-02 02:45:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:45:14.608635 | orchestrator | 2026-01-02 02:45:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:45:14.611377 | orchestrator | 2026-01-02 02:45:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:45:14.611452 | orchestrator | 2026-01-02 02:45:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:45:17.654321 | orchestrator | 2026-01-02 02:45:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:45:17.656552 | orchestrator | 2026-01-02 02:45:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:45:17.656614 | orchestrator | 2026-01-02 02:45:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:45:20.708127 | orchestrator | 2026-01-02 02:45:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:45:20.711079 | orchestrator | 2026-01-02 02:45:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:45:20.711575 | orchestrator | 2026-01-02 02:45:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:45:23.755467 | orchestrator | 2026-01-02 02:45:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:45:23.756703 | orchestrator | 2026-01-02 02:45:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:45:23.756890 | orchestrator | 2026-01-02 02:45:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:45:26.803511 | orchestrator | 2026-01-02 02:45:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:45:26.805136 | orchestrator | 2026-01-02 02:45:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:45:26.805160 | orchestrator | 2026-01-02 02:45:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:45:29.856240 | orchestrator | 2026-01-02 02:45:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:45:29.857699 | orchestrator | 2026-01-02 02:45:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:45:29.857745 | orchestrator | 2026-01-02 02:45:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:45:32.912081 | orchestrator | 2026-01-02 02:45:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:45:32.915375 | orchestrator | 2026-01-02 02:45:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:45:32.915491 | orchestrator | 2026-01-02 02:45:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:45:35.962149 | orchestrator | 2026-01-02 02:45:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:45:35.964605 | orchestrator | 2026-01-02 02:45:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:45:35.964688 | orchestrator | 2026-01-02 02:45:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:45:39.019425 | orchestrator | 2026-01-02 02:45:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:45:39.020886 | orchestrator | 2026-01-02 02:45:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:45:39.020928 | orchestrator | 2026-01-02 02:45:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:45:42.070005 | orchestrator | 2026-01-02 02:45:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:45:42.070619 | orchestrator | 2026-01-02 02:45:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:45:42.070740 | orchestrator | 2026-01-02 02:45:42 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:45:45.125642 | orchestrator | 2026-01-02 02:45:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:45:45.128577 | orchestrator | 2026-01-02 02:45:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:45:45.128664 | orchestrator | 2026-01-02 02:45:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:45:48.180760 | orchestrator | 2026-01-02 02:45:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:45:48.182963 | orchestrator | 2026-01-02 02:45:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:45:48.183035 | orchestrator | 2026-01-02 02:45:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:45:51.238353 | orchestrator | 2026-01-02 02:45:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:45:51.240922 | orchestrator | 2026-01-02 02:45:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:45:51.240973 | orchestrator | 2026-01-02 02:45:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:45:54.283597 | orchestrator | 2026-01-02 02:45:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:45:54.285104 | orchestrator | 2026-01-02 02:45:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:45:54.285157 | orchestrator | 2026-01-02 02:45:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:45:57.335113 | orchestrator | 2026-01-02 02:45:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:45:57.337195 | orchestrator | 2026-01-02 02:45:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:45:57.337242 | orchestrator | 2026-01-02 02:45:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:00.387526 | orchestrator | 2026-01-02 
02:46:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:46:00.388739 | orchestrator | 2026-01-02 02:46:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:46:00.388775 | orchestrator | 2026-01-02 02:46:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:03.440274 | orchestrator | 2026-01-02 02:46:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:46:03.443416 | orchestrator | 2026-01-02 02:46:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:46:03.443497 | orchestrator | 2026-01-02 02:46:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:06.489980 | orchestrator | 2026-01-02 02:46:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:46:06.491314 | orchestrator | 2026-01-02 02:46:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:46:06.491344 | orchestrator | 2026-01-02 02:46:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:09.543416 | orchestrator | 2026-01-02 02:46:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:46:09.545042 | orchestrator | 2026-01-02 02:46:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:46:09.545170 | orchestrator | 2026-01-02 02:46:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:12.598202 | orchestrator | 2026-01-02 02:46:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:46:12.600461 | orchestrator | 2026-01-02 02:46:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:46:12.600492 | orchestrator | 2026-01-02 02:46:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:15.649636 | orchestrator | 2026-01-02 02:46:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:46:15.651068 | orchestrator | 2026-01-02 02:46:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:46:15.651127 | orchestrator | 2026-01-02 02:46:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:18.694331 | orchestrator | 2026-01-02 02:46:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:46:18.695862 | orchestrator | 2026-01-02 02:46:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:46:18.695886 | orchestrator | 2026-01-02 02:46:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:21.738537 | orchestrator | 2026-01-02 02:46:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:46:21.740901 | orchestrator | 2026-01-02 02:46:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:46:21.740978 | orchestrator | 2026-01-02 02:46:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:24.785760 | orchestrator | 2026-01-02 02:46:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:46:24.788316 | orchestrator | 2026-01-02 02:46:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:46:24.788371 | orchestrator | 2026-01-02 02:46:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:27.841871 | orchestrator | 2026-01-02 02:46:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:46:27.843704 | orchestrator | 2026-01-02 02:46:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:46:27.843734 | orchestrator | 2026-01-02 02:46:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:30.892430 | orchestrator | 2026-01-02 02:46:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:46:30.895555 | orchestrator | 2026-01-02 02:46:30 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:46:30.895604 | orchestrator | 2026-01-02 02:46:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:33.946189 | orchestrator | 2026-01-02 02:46:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:46:33.947786 | orchestrator | 2026-01-02 02:46:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:46:33.947880 | orchestrator | 2026-01-02 02:46:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:36.993691 | orchestrator | 2026-01-02 02:46:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:46:36.994238 | orchestrator | 2026-01-02 02:46:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:46:36.994266 | orchestrator | 2026-01-02 02:46:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:40.053195 | orchestrator | 2026-01-02 02:46:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:46:40.055163 | orchestrator | 2026-01-02 02:46:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:46:40.055237 | orchestrator | 2026-01-02 02:46:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:43.107937 | orchestrator | 2026-01-02 02:46:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:46:43.110365 | orchestrator | 2026-01-02 02:46:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:46:43.110441 | orchestrator | 2026-01-02 02:46:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:46.163719 | orchestrator | 2026-01-02 02:46:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:46:46.168070 | orchestrator | 2026-01-02 02:46:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:46:46.168122 | orchestrator | 2026-01-02 02:46:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:49.215861 | orchestrator | 2026-01-02 02:46:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:46:49.219348 | orchestrator | 2026-01-02 02:46:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:46:49.219394 | orchestrator | 2026-01-02 02:46:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:52.271887 | orchestrator | 2026-01-02 02:46:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:46:52.272406 | orchestrator | 2026-01-02 02:46:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:46:52.272438 | orchestrator | 2026-01-02 02:46:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:55.325865 | orchestrator | 2026-01-02 02:46:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:46:55.327605 | orchestrator | 2026-01-02 02:46:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:46:55.327641 | orchestrator | 2026-01-02 02:46:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:46:58.376590 | orchestrator | 2026-01-02 02:46:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:46:58.378121 | orchestrator | 2026-01-02 02:46:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:46:58.378234 | orchestrator | 2026-01-02 02:46:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:47:01.423351 | orchestrator | 2026-01-02 02:47:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:47:01.425496 | orchestrator | 2026-01-02 02:47:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:01.425552 | orchestrator | 2026-01-02 02:47:01 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:47:04.470211 | orchestrator | 2026-01-02 02:47:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:47:04.472120 | orchestrator | 2026-01-02 02:47:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:04.472219 | orchestrator | 2026-01-02 02:47:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:47:07.514738 | orchestrator | 2026-01-02 02:47:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:47:07.516491 | orchestrator | 2026-01-02 02:47:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:07.517159 | orchestrator | 2026-01-02 02:47:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:47:10.558004 | orchestrator | 2026-01-02 02:47:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:47:10.558528 | orchestrator | 2026-01-02 02:47:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:10.558813 | orchestrator | 2026-01-02 02:47:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:47:13.612498 | orchestrator | 2026-01-02 02:47:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:47:13.613710 | orchestrator | 2026-01-02 02:47:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:13.613996 | orchestrator | 2026-01-02 02:47:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:47:16.661907 | orchestrator | 2026-01-02 02:47:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:47:16.663349 | orchestrator | 2026-01-02 02:47:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:16.663422 | orchestrator | 2026-01-02 02:47:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:47:19.714665 | orchestrator | 2026-01-02 
02:47:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:47:19.716329 | orchestrator | 2026-01-02 02:47:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:19.716384 | orchestrator | 2026-01-02 02:47:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:47:22.764637 | orchestrator | 2026-01-02 02:47:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:47:22.767634 | orchestrator | 2026-01-02 02:47:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:22.767878 | orchestrator | 2026-01-02 02:47:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:47:25.812942 | orchestrator | 2026-01-02 02:47:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:47:25.815254 | orchestrator | 2026-01-02 02:47:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:25.815326 | orchestrator | 2026-01-02 02:47:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:47:28.869056 | orchestrator | 2026-01-02 02:47:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:47:28.871384 | orchestrator | 2026-01-02 02:47:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:28.871502 | orchestrator | 2026-01-02 02:47:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:47:31.911304 | orchestrator | 2026-01-02 02:47:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:47:31.913048 | orchestrator | 2026-01-02 02:47:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:31.913203 | orchestrator | 2026-01-02 02:47:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:47:34.958820 | orchestrator | 2026-01-02 02:47:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:47:34.960515 | orchestrator | 2026-01-02 02:47:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:34.961202 | orchestrator | 2026-01-02 02:47:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:47:38.019159 | orchestrator | 2026-01-02 02:47:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:47:38.020903 | orchestrator | 2026-01-02 02:47:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:38.021175 | orchestrator | 2026-01-02 02:47:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:47:41.067993 | orchestrator | 2026-01-02 02:47:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:47:41.069771 | orchestrator | 2026-01-02 02:47:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:41.069798 | orchestrator | 2026-01-02 02:47:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:47:44.111394 | orchestrator | 2026-01-02 02:47:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:47:44.112683 | orchestrator | 2026-01-02 02:47:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:44.112768 | orchestrator | 2026-01-02 02:47:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:47:47.170475 | orchestrator | 2026-01-02 02:47:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:47:47.172981 | orchestrator | 2026-01-02 02:47:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:47.173391 | orchestrator | 2026-01-02 02:47:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:47:50.229917 | orchestrator | 2026-01-02 02:47:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:47:50.234440 | orchestrator | 2026-01-02 02:47:50 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:50.235000 | orchestrator | 2026-01-02 02:47:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:47:53.294644 | orchestrator | 2026-01-02 02:47:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:47:53.296148 | orchestrator | 2026-01-02 02:47:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:53.296178 | orchestrator | 2026-01-02 02:47:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:47:56.344384 | orchestrator | 2026-01-02 02:47:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:47:56.345702 | orchestrator | 2026-01-02 02:47:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:56.345744 | orchestrator | 2026-01-02 02:47:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:47:59.396325 | orchestrator | 2026-01-02 02:47:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:47:59.399193 | orchestrator | 2026-01-02 02:47:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:47:59.399239 | orchestrator | 2026-01-02 02:47:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:48:02.455825 | orchestrator | 2026-01-02 02:48:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:48:02.458165 | orchestrator | 2026-01-02 02:48:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:48:02.458227 | orchestrator | 2026-01-02 02:48:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:48:05.505550 | orchestrator | 2026-01-02 02:48:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:48:05.506347 | orchestrator | 2026-01-02 02:48:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:48:05.506636 | orchestrator | 2026-01-02 02:48:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:48:08.553714 | orchestrator | 2026-01-02 02:48:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:48:08.555078 | orchestrator | 2026-01-02 02:48:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:48:08.555124 | orchestrator | 2026-01-02 02:48:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:48:11.606712 | orchestrator | 2026-01-02 02:48:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:48:11.608475 | orchestrator | 2026-01-02 02:48:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:48:11.608524 | orchestrator | 2026-01-02 02:48:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:48:14.655412 | orchestrator | 2026-01-02 02:48:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:48:14.657247 | orchestrator | 2026-01-02 02:48:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:48:14.657279 | orchestrator | 2026-01-02 02:48:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:48:17.703314 | orchestrator | 2026-01-02 02:48:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:48:17.705579 | orchestrator | 2026-01-02 02:48:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:48:17.705815 | orchestrator | 2026-01-02 02:48:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:48:20.753625 | orchestrator | 2026-01-02 02:48:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:48:20.755493 | orchestrator | 2026-01-02 02:48:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:48:20.756243 | orchestrator | 2026-01-02 02:48:20 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:48:23.801406 | orchestrator | 2026-01-02 02:48:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:48:23.802147 | orchestrator | 2026-01-02 02:48:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:48:23.802241 | orchestrator | 2026-01-02 02:48:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:48:26.853555 | orchestrator | 2026-01-02 02:48:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:48:26.854652 | orchestrator | 2026-01-02 02:48:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:48:26.854820 | orchestrator | 2026-01-02 02:48:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:48:29.901251 | orchestrator | 2026-01-02 02:48:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:48:29.902696 | orchestrator | 2026-01-02 02:48:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:48:29.902731 | orchestrator | 2026-01-02 02:48:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:48:32.957309 | orchestrator | 2026-01-02 02:48:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:48:32.959042 | orchestrator | 2026-01-02 02:48:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:48:32.959366 | orchestrator | 2026-01-02 02:48:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:48:36.008493 | orchestrator | 2026-01-02 02:48:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:48:36.010372 | orchestrator | 2026-01-02 02:48:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:48:36.010497 | orchestrator | 2026-01-02 02:48:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:48:39.058647 | orchestrator | 2026-01-02 
02:48:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:48:39.060536 | orchestrator | 2026-01-02 02:48:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:48:39.060666 | orchestrator | 2026-01-02 02:48:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:48:42.105221 | orchestrator | 2026-01-02 02:48:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:48:42.106141 | orchestrator | 2026-01-02 02:48:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:48:42.106215 | orchestrator | 2026-01-02 02:48:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:48:45.153092 | orchestrator | 2026-01-02 02:48:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:48:45.154611 | orchestrator | 2026-01-02 02:48:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:48:45.154651 | orchestrator | 2026-01-02 02:48:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:48:48.207265 | orchestrator | 2026-01-02 02:48:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:48:48.211803 | orchestrator | 2026-01-02 02:48:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:48:48.212738 | orchestrator | 2026-01-02 02:48:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:48:51.262506 | orchestrator | 2026-01-02 02:48:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:48:51.264630 | orchestrator | 2026-01-02 02:48:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:48:51.264781 | orchestrator | 2026-01-02 02:48:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:48:54.305441 | orchestrator | 2026-01-02 02:48:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:48:54.308819 | orchestrator | 2026-01-02 02:48:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:48:54.308943 | orchestrator | 2026-01-02 02:48:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:48:57.361166 | orchestrator | 2026-01-02 02:48:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:48:57.362495 | orchestrator | 2026-01-02 02:48:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:48:57.362529 | orchestrator | 2026-01-02 02:48:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:00.413440 | orchestrator | 2026-01-02 02:49:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:00.414633 | orchestrator | 2026-01-02 02:49:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:49:00.414695 | orchestrator | 2026-01-02 02:49:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:03.460667 | orchestrator | 2026-01-02 02:49:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:03.462838 | orchestrator | 2026-01-02 02:49:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:49:03.462983 | orchestrator | 2026-01-02 02:49:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:06.510824 | orchestrator | 2026-01-02 02:49:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:06.512425 | orchestrator | 2026-01-02 02:49:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:49:06.512478 | orchestrator | 2026-01-02 02:49:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:09.564607 | orchestrator | 2026-01-02 02:49:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:09.566185 | orchestrator | 2026-01-02 02:49:09 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:49:09.566398 | orchestrator | 2026-01-02 02:49:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:12.620531 | orchestrator | 2026-01-02 02:49:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:12.623251 | orchestrator | 2026-01-02 02:49:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:49:12.623291 | orchestrator | 2026-01-02 02:49:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:15.678614 | orchestrator | 2026-01-02 02:49:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:15.679447 | orchestrator | 2026-01-02 02:49:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:49:15.679484 | orchestrator | 2026-01-02 02:49:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:18.736126 | orchestrator | 2026-01-02 02:49:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:18.737684 | orchestrator | 2026-01-02 02:49:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:49:18.737734 | orchestrator | 2026-01-02 02:49:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:21.786664 | orchestrator | 2026-01-02 02:49:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:21.788609 | orchestrator | 2026-01-02 02:49:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:49:21.788673 | orchestrator | 2026-01-02 02:49:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:24.835303 | orchestrator | 2026-01-02 02:49:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:24.838055 | orchestrator | 2026-01-02 02:49:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:49:24.838105 | orchestrator | 2026-01-02 02:49:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:27.895038 | orchestrator | 2026-01-02 02:49:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:27.898151 | orchestrator | 2026-01-02 02:49:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:49:27.898201 | orchestrator | 2026-01-02 02:49:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:30.946188 | orchestrator | 2026-01-02 02:49:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:30.949746 | orchestrator | 2026-01-02 02:49:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:49:30.950391 | orchestrator | 2026-01-02 02:49:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:34.004085 | orchestrator | 2026-01-02 02:49:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:34.005433 | orchestrator | 2026-01-02 02:49:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:49:34.005468 | orchestrator | 2026-01-02 02:49:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:37.054437 | orchestrator | 2026-01-02 02:49:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:37.055677 | orchestrator | 2026-01-02 02:49:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:49:37.055718 | orchestrator | 2026-01-02 02:49:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:40.113658 | orchestrator | 2026-01-02 02:49:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:40.116286 | orchestrator | 2026-01-02 02:49:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:49:40.116331 | orchestrator | 2026-01-02 02:49:40 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:49:43.161457 | orchestrator | 2026-01-02 02:49:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:43.162398 | orchestrator | 2026-01-02 02:49:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:49:43.162446 | orchestrator | 2026-01-02 02:49:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:46.216120 | orchestrator | 2026-01-02 02:49:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:46.219556 | orchestrator | 2026-01-02 02:49:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:49:46.219636 | orchestrator | 2026-01-02 02:49:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:49.269498 | orchestrator | 2026-01-02 02:49:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:49.271021 | orchestrator | 2026-01-02 02:49:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:49:49.271179 | orchestrator | 2026-01-02 02:49:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:52.312017 | orchestrator | 2026-01-02 02:49:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:52.313438 | orchestrator | 2026-01-02 02:49:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:49:52.313493 | orchestrator | 2026-01-02 02:49:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:55.365942 | orchestrator | 2026-01-02 02:49:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:55.366895 | orchestrator | 2026-01-02 02:49:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:49:55.366938 | orchestrator | 2026-01-02 02:49:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:58.417686 | orchestrator | 2026-01-02 
02:49:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:49:58.420251 | orchestrator | 2026-01-02 02:49:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:49:58.420329 | orchestrator | 2026-01-02 02:49:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:01.466575 | orchestrator | 2026-01-02 02:50:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:50:01.468793 | orchestrator | 2026-01-02 02:50:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:50:01.468936 | orchestrator | 2026-01-02 02:50:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:04.510692 | orchestrator | 2026-01-02 02:50:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:50:04.511152 | orchestrator | 2026-01-02 02:50:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:50:04.511192 | orchestrator | 2026-01-02 02:50:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:07.553255 | orchestrator | 2026-01-02 02:50:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:50:07.555570 | orchestrator | 2026-01-02 02:50:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:50:07.555608 | orchestrator | 2026-01-02 02:50:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:10.600861 | orchestrator | 2026-01-02 02:50:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:50:10.603166 | orchestrator | 2026-01-02 02:50:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:50:10.604021 | orchestrator | 2026-01-02 02:50:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:13.649386 | orchestrator | 2026-01-02 02:50:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:50:13.649705 | orchestrator | 2026-01-02 02:50:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:50:13.649750 | orchestrator | 2026-01-02 02:50:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:16.695318 | orchestrator | 2026-01-02 02:50:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:50:16.697493 | orchestrator | 2026-01-02 02:50:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:50:16.698501 | orchestrator | 2026-01-02 02:50:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:19.740736 | orchestrator | 2026-01-02 02:50:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:50:19.741189 | orchestrator | 2026-01-02 02:50:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:50:19.741222 | orchestrator | 2026-01-02 02:50:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:22.790412 | orchestrator | 2026-01-02 02:50:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:50:22.793294 | orchestrator | 2026-01-02 02:50:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:50:22.793336 | orchestrator | 2026-01-02 02:50:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:25.845860 | orchestrator | 2026-01-02 02:50:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:50:25.846825 | orchestrator | 2026-01-02 02:50:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:50:25.846861 | orchestrator | 2026-01-02 02:50:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:28.896049 | orchestrator | 2026-01-02 02:50:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:50:28.898439 | orchestrator | 2026-01-02 02:50:28 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:50:28.898660 | orchestrator | 2026-01-02 02:50:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:31.949818 | orchestrator | 2026-01-02 02:50:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:50:31.951202 | orchestrator | 2026-01-02 02:50:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:50:31.951424 | orchestrator | 2026-01-02 02:50:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:35.002091 | orchestrator | 2026-01-02 02:50:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:50:35.004208 | orchestrator | 2026-01-02 02:50:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:50:35.004581 | orchestrator | 2026-01-02 02:50:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:38.054708 | orchestrator | 2026-01-02 02:50:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:50:38.056568 | orchestrator | 2026-01-02 02:50:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:50:38.056613 | orchestrator | 2026-01-02 02:50:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:41.100472 | orchestrator | 2026-01-02 02:50:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:50:41.100784 | orchestrator | 2026-01-02 02:50:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:50:41.101030 | orchestrator | 2026-01-02 02:50:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:44.148310 | orchestrator | 2026-01-02 02:50:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:50:44.151741 | orchestrator | 2026-01-02 02:50:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:50:44.151988 | orchestrator | 2026-01-02 02:50:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:47.200410 | orchestrator | 2026-01-02 02:50:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:50:47.201990 | orchestrator | 2026-01-02 02:50:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:50:47.202012 | orchestrator | 2026-01-02 02:50:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:50.249629 | orchestrator | 2026-01-02 02:50:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:50:50.251333 | orchestrator | 2026-01-02 02:50:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:50:50.251668 | orchestrator | 2026-01-02 02:50:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:53.293198 | orchestrator | 2026-01-02 02:50:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:50:53.294220 | orchestrator | 2026-01-02 02:50:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:50:53.294502 | orchestrator | 2026-01-02 02:50:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:56.339759 | orchestrator | 2026-01-02 02:50:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:50:56.341761 | orchestrator | 2026-01-02 02:50:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:50:56.341811 | orchestrator | 2026-01-02 02:50:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:59.383407 | orchestrator | 2026-01-02 02:50:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:50:59.385855 | orchestrator | 2026-01-02 02:50:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:50:59.386134 | orchestrator | 2026-01-02 02:50:59 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:51:02.433992 | orchestrator | 2026-01-02 02:51:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:51:02.435825 | orchestrator | 2026-01-02 02:51:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:51:02.435864 | orchestrator | 2026-01-02 02:51:02 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 02:51:05 through 02:56:16; both tasks remained in state STARTED throughout ...]
2026-01-02 02:56:16.655267 | orchestrator | 2026-01-02 02:56:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:56:16.657278 | orchestrator | 2026-01-02 02:56:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:56:16.657646 | orchestrator | 2026-01-02 02:56:16 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:56:19.709660 | orchestrator | 2026-01-02 02:56:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:56:19.711440 | orchestrator | 2026-01-02 02:56:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:56:19.712830 | orchestrator | 2026-01-02 02:56:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:56:22.750985 | orchestrator | 2026-01-02 02:56:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:56:22.753272 | orchestrator | 2026-01-02 02:56:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:56:22.753348 | orchestrator | 2026-01-02 02:56:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:56:25.799707 | orchestrator | 2026-01-02 02:56:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:56:25.801642 | orchestrator | 2026-01-02 02:56:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:56:25.801803 | orchestrator | 2026-01-02 02:56:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:56:28.848338 | orchestrator | 2026-01-02 02:56:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:56:28.850223 | orchestrator | 2026-01-02 02:56:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:56:28.850304 | orchestrator | 2026-01-02 02:56:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:56:31.901151 | orchestrator | 2026-01-02 02:56:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:56:31.902593 | orchestrator | 2026-01-02 02:56:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:56:31.902627 | orchestrator | 2026-01-02 02:56:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:56:34.951277 | orchestrator | 2026-01-02 
02:56:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:56:34.953914 | orchestrator | 2026-01-02 02:56:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:56:34.953997 | orchestrator | 2026-01-02 02:56:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:56:37.994315 | orchestrator | 2026-01-02 02:56:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:56:37.995339 | orchestrator | 2026-01-02 02:56:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:56:37.995669 | orchestrator | 2026-01-02 02:56:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:56:41.046367 | orchestrator | 2026-01-02 02:56:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:56:41.048281 | orchestrator | 2026-01-02 02:56:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:56:41.048337 | orchestrator | 2026-01-02 02:56:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:56:44.094602 | orchestrator | 2026-01-02 02:56:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:56:44.096928 | orchestrator | 2026-01-02 02:56:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:56:44.096986 | orchestrator | 2026-01-02 02:56:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:56:47.138923 | orchestrator | 2026-01-02 02:56:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:56:47.139239 | orchestrator | 2026-01-02 02:56:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:56:47.139324 | orchestrator | 2026-01-02 02:56:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:56:50.186979 | orchestrator | 2026-01-02 02:56:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:56:50.189270 | orchestrator | 2026-01-02 02:56:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:56:50.189335 | orchestrator | 2026-01-02 02:56:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:56:53.229159 | orchestrator | 2026-01-02 02:56:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:56:53.232919 | orchestrator | 2026-01-02 02:56:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:56:53.233020 | orchestrator | 2026-01-02 02:56:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:56:56.282245 | orchestrator | 2026-01-02 02:56:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:56:56.284040 | orchestrator | 2026-01-02 02:56:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:56:56.284058 | orchestrator | 2026-01-02 02:56:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:56:59.335226 | orchestrator | 2026-01-02 02:56:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:56:59.337713 | orchestrator | 2026-01-02 02:56:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:56:59.338277 | orchestrator | 2026-01-02 02:56:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:57:02.396617 | orchestrator | 2026-01-02 02:57:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:57:02.398496 | orchestrator | 2026-01-02 02:57:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:57:02.398549 | orchestrator | 2026-01-02 02:57:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:57:05.449763 | orchestrator | 2026-01-02 02:57:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:57:05.453865 | orchestrator | 2026-01-02 02:57:05 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:57:05.454724 | orchestrator | 2026-01-02 02:57:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:57:08.504807 | orchestrator | 2026-01-02 02:57:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:57:08.507320 | orchestrator | 2026-01-02 02:57:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:57:08.507405 | orchestrator | 2026-01-02 02:57:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:57:11.558779 | orchestrator | 2026-01-02 02:57:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:57:11.560401 | orchestrator | 2026-01-02 02:57:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:57:11.560593 | orchestrator | 2026-01-02 02:57:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:57:14.616280 | orchestrator | 2026-01-02 02:57:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:57:14.618909 | orchestrator | 2026-01-02 02:57:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:57:14.619170 | orchestrator | 2026-01-02 02:57:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:57:17.671732 | orchestrator | 2026-01-02 02:57:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:57:17.673068 | orchestrator | 2026-01-02 02:57:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:57:17.673274 | orchestrator | 2026-01-02 02:57:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:57:20.719383 | orchestrator | 2026-01-02 02:57:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:57:20.720484 | orchestrator | 2026-01-02 02:57:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:57:20.720513 | orchestrator | 2026-01-02 02:57:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:57:23.772269 | orchestrator | 2026-01-02 02:57:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:57:23.774295 | orchestrator | 2026-01-02 02:57:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:57:23.774340 | orchestrator | 2026-01-02 02:57:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:57:26.818707 | orchestrator | 2026-01-02 02:57:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:57:26.819596 | orchestrator | 2026-01-02 02:57:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:57:26.819798 | orchestrator | 2026-01-02 02:57:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:57:29.866288 | orchestrator | 2026-01-02 02:57:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:57:29.868204 | orchestrator | 2026-01-02 02:57:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:57:29.868255 | orchestrator | 2026-01-02 02:57:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:57:32.919739 | orchestrator | 2026-01-02 02:57:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:57:32.921815 | orchestrator | 2026-01-02 02:57:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:57:33.179440 | orchestrator | 2026-01-02 02:57:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:57:35.976804 | orchestrator | 2026-01-02 02:57:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:57:35.978239 | orchestrator | 2026-01-02 02:57:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:57:35.978295 | orchestrator | 2026-01-02 02:57:35 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:57:39.022877 | orchestrator | 2026-01-02 02:57:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:57:39.023935 | orchestrator | 2026-01-02 02:57:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:57:39.024006 | orchestrator | 2026-01-02 02:57:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:57:42.073444 | orchestrator | 2026-01-02 02:57:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:57:42.073622 | orchestrator | 2026-01-02 02:57:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:57:42.073651 | orchestrator | 2026-01-02 02:57:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:57:45.115325 | orchestrator | 2026-01-02 02:57:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:57:45.118607 | orchestrator | 2026-01-02 02:57:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:57:45.118646 | orchestrator | 2026-01-02 02:57:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:57:48.172484 | orchestrator | 2026-01-02 02:57:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:57:48.175071 | orchestrator | 2026-01-02 02:57:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:57:48.175229 | orchestrator | 2026-01-02 02:57:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:57:51.219295 | orchestrator | 2026-01-02 02:57:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:57:51.222466 | orchestrator | 2026-01-02 02:57:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:57:51.222522 | orchestrator | 2026-01-02 02:57:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:57:54.271309 | orchestrator | 2026-01-02 
02:57:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:57:54.273241 | orchestrator | 2026-01-02 02:57:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:57:54.273330 | orchestrator | 2026-01-02 02:57:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:57:57.313037 | orchestrator | 2026-01-02 02:57:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:57:57.313887 | orchestrator | 2026-01-02 02:57:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:57:57.313901 | orchestrator | 2026-01-02 02:57:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:58:00.358520 | orchestrator | 2026-01-02 02:58:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:58:00.359487 | orchestrator | 2026-01-02 02:58:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:58:00.359524 | orchestrator | 2026-01-02 02:58:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:58:03.400185 | orchestrator | 2026-01-02 02:58:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:58:03.402268 | orchestrator | 2026-01-02 02:58:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:58:03.402353 | orchestrator | 2026-01-02 02:58:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:58:06.452097 | orchestrator | 2026-01-02 02:58:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:58:06.453119 | orchestrator | 2026-01-02 02:58:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:58:06.453186 | orchestrator | 2026-01-02 02:58:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:58:09.497085 | orchestrator | 2026-01-02 02:58:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:58:09.499266 | orchestrator | 2026-01-02 02:58:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:58:09.499318 | orchestrator | 2026-01-02 02:58:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:58:12.549122 | orchestrator | 2026-01-02 02:58:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:58:12.550343 | orchestrator | 2026-01-02 02:58:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:58:12.550604 | orchestrator | 2026-01-02 02:58:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:58:15.596176 | orchestrator | 2026-01-02 02:58:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:58:15.597455 | orchestrator | 2026-01-02 02:58:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:58:15.597517 | orchestrator | 2026-01-02 02:58:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:58:18.640379 | orchestrator | 2026-01-02 02:58:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:58:18.642504 | orchestrator | 2026-01-02 02:58:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:58:18.642596 | orchestrator | 2026-01-02 02:58:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:58:21.688195 | orchestrator | 2026-01-02 02:58:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:58:21.689319 | orchestrator | 2026-01-02 02:58:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:58:21.689382 | orchestrator | 2026-01-02 02:58:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:58:24.740349 | orchestrator | 2026-01-02 02:58:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:58:24.742657 | orchestrator | 2026-01-02 02:58:24 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:58:24.742843 | orchestrator | 2026-01-02 02:58:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:58:27.790072 | orchestrator | 2026-01-02 02:58:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:58:27.792328 | orchestrator | 2026-01-02 02:58:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:58:27.792482 | orchestrator | 2026-01-02 02:58:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:58:30.839577 | orchestrator | 2026-01-02 02:58:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:58:30.840851 | orchestrator | 2026-01-02 02:58:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:58:30.840884 | orchestrator | 2026-01-02 02:58:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:58:33.895641 | orchestrator | 2026-01-02 02:58:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:58:33.897476 | orchestrator | 2026-01-02 02:58:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:58:33.897539 | orchestrator | 2026-01-02 02:58:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:58:36.943765 | orchestrator | 2026-01-02 02:58:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:58:36.945021 | orchestrator | 2026-01-02 02:58:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:58:36.945215 | orchestrator | 2026-01-02 02:58:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:58:39.996694 | orchestrator | 2026-01-02 02:58:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:58:40.000143 | orchestrator | 2026-01-02 02:58:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:58:40.000222 | orchestrator | 2026-01-02 02:58:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:58:43.042373 | orchestrator | 2026-01-02 02:58:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:58:43.043521 | orchestrator | 2026-01-02 02:58:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:58:43.043553 | orchestrator | 2026-01-02 02:58:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:58:46.091931 | orchestrator | 2026-01-02 02:58:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:58:46.094398 | orchestrator | 2026-01-02 02:58:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:58:46.094434 | orchestrator | 2026-01-02 02:58:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:58:49.143949 | orchestrator | 2026-01-02 02:58:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:58:49.145067 | orchestrator | 2026-01-02 02:58:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:58:49.145107 | orchestrator | 2026-01-02 02:58:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:58:52.195005 | orchestrator | 2026-01-02 02:58:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:58:52.197694 | orchestrator | 2026-01-02 02:58:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:58:52.197791 | orchestrator | 2026-01-02 02:58:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:58:55.240367 | orchestrator | 2026-01-02 02:58:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:58:55.243338 | orchestrator | 2026-01-02 02:58:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:58:55.243390 | orchestrator | 2026-01-02 02:58:55 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 02:58:58.293542 | orchestrator | 2026-01-02 02:58:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:58:58.295725 | orchestrator | 2026-01-02 02:58:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:58:58.295820 | orchestrator | 2026-01-02 02:58:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:01.348021 | orchestrator | 2026-01-02 02:59:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:59:01.350092 | orchestrator | 2026-01-02 02:59:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:59:01.350133 | orchestrator | 2026-01-02 02:59:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:04.401729 | orchestrator | 2026-01-02 02:59:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:59:04.403319 | orchestrator | 2026-01-02 02:59:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:59:04.403423 | orchestrator | 2026-01-02 02:59:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:07.450493 | orchestrator | 2026-01-02 02:59:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:59:07.451625 | orchestrator | 2026-01-02 02:59:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:59:07.451886 | orchestrator | 2026-01-02 02:59:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:10.499642 | orchestrator | 2026-01-02 02:59:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:59:10.501684 | orchestrator | 2026-01-02 02:59:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:59:10.501810 | orchestrator | 2026-01-02 02:59:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:13.547197 | orchestrator | 2026-01-02 
02:59:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:59:13.548563 | orchestrator | 2026-01-02 02:59:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:59:13.548611 | orchestrator | 2026-01-02 02:59:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:16.598624 | orchestrator | 2026-01-02 02:59:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:59:16.599724 | orchestrator | 2026-01-02 02:59:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:59:16.599764 | orchestrator | 2026-01-02 02:59:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:19.648004 | orchestrator | 2026-01-02 02:59:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:59:19.649307 | orchestrator | 2026-01-02 02:59:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:59:19.649370 | orchestrator | 2026-01-02 02:59:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:22.696523 | orchestrator | 2026-01-02 02:59:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:59:22.699040 | orchestrator | 2026-01-02 02:59:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:59:22.699090 | orchestrator | 2026-01-02 02:59:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:25.747468 | orchestrator | 2026-01-02 02:59:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:59:25.750449 | orchestrator | 2026-01-02 02:59:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:59:25.750487 | orchestrator | 2026-01-02 02:59:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:28.794548 | orchestrator | 2026-01-02 02:59:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 02:59:28.796571 | orchestrator | 2026-01-02 02:59:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:59:28.796849 | orchestrator | 2026-01-02 02:59:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:31.844718 | orchestrator | 2026-01-02 02:59:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:59:31.847702 | orchestrator | 2026-01-02 02:59:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:59:31.847825 | orchestrator | 2026-01-02 02:59:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:34.893325 | orchestrator | 2026-01-02 02:59:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:59:34.895167 | orchestrator | 2026-01-02 02:59:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:59:34.895206 | orchestrator | 2026-01-02 02:59:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:37.944800 | orchestrator | 2026-01-02 02:59:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:59:37.946743 | orchestrator | 2026-01-02 02:59:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:59:37.946784 | orchestrator | 2026-01-02 02:59:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:41.003480 | orchestrator | 2026-01-02 02:59:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:59:41.005906 | orchestrator | 2026-01-02 02:59:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:59:41.006096 | orchestrator | 2026-01-02 02:59:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:44.071731 | orchestrator | 2026-01-02 02:59:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:59:44.073511 | orchestrator | 2026-01-02 02:59:44 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:59:44.073627 | orchestrator | 2026-01-02 02:59:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:47.122735 | orchestrator | 2026-01-02 02:59:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:59:47.125016 | orchestrator | 2026-01-02 02:59:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:59:47.125090 | orchestrator | 2026-01-02 02:59:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:50.179000 | orchestrator | 2026-01-02 02:59:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:59:50.182199 | orchestrator | 2026-01-02 02:59:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:59:50.182269 | orchestrator | 2026-01-02 02:59:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:53.232060 | orchestrator | 2026-01-02 02:59:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:59:53.232892 | orchestrator | 2026-01-02 02:59:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:59:53.233043 | orchestrator | 2026-01-02 02:59:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:56.281680 | orchestrator | 2026-01-02 02:59:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:59:56.283532 | orchestrator | 2026-01-02 02:59:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 02:59:56.283581 | orchestrator | 2026-01-02 02:59:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:59.338220 | orchestrator | 2026-01-02 02:59:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 02:59:59.339802 | orchestrator | 2026-01-02 02:59:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
02:59:59.339889 | orchestrator | 2026-01-02 02:59:59 | INFO  | Wait 1 second(s) until the next check
2026-01-02 03:00:02.388589 | orchestrator | 2026-01-02 03:00:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 03:00:02.390178 | orchestrator | 2026-01-02 03:00:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 03:00:02.390313 | orchestrator | 2026-01-02 03:00:02 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 03:00:05 to 03:05:28; both tasks remained in state STARTED throughout ...]
2026-01-02 03:05:31.870106 | orchestrator | 2026-01-02 03:05:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 03:05:31.871179 | orchestrator | 2026-01-02 03:05:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 03:05:31.871321 | orchestrator | 2026-01-02 03:05:31 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:05:34.924952 | orchestrator | 2026-01-02 03:05:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:05:34.926858 | orchestrator | 2026-01-02 03:05:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:05:34.926916 | orchestrator | 2026-01-02 03:05:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:05:37.973281 | orchestrator | 2026-01-02 03:05:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:05:37.975476 | orchestrator | 2026-01-02 03:05:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:05:37.976005 | orchestrator | 2026-01-02 03:05:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:05:41.035498 | orchestrator | 2026-01-02 03:05:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:05:41.037296 | orchestrator | 2026-01-02 03:05:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:05:41.037356 | orchestrator | 2026-01-02 03:05:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:05:44.090362 | orchestrator | 2026-01-02 03:05:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:05:44.092950 | orchestrator | 2026-01-02 03:05:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:05:44.093003 | orchestrator | 2026-01-02 03:05:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:05:47.141050 | orchestrator | 2026-01-02 03:05:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:05:47.142345 | orchestrator | 2026-01-02 03:05:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:05:47.142437 | orchestrator | 2026-01-02 03:05:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:05:50.187807 | orchestrator | 2026-01-02 
03:05:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:05:50.188938 | orchestrator | 2026-01-02 03:05:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:05:50.188981 | orchestrator | 2026-01-02 03:05:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:05:53.233992 | orchestrator | 2026-01-02 03:05:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:05:53.235300 | orchestrator | 2026-01-02 03:05:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:05:53.235432 | orchestrator | 2026-01-02 03:05:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:05:56.285441 | orchestrator | 2026-01-02 03:05:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:05:56.286130 | orchestrator | 2026-01-02 03:05:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:05:56.286164 | orchestrator | 2026-01-02 03:05:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:05:59.336349 | orchestrator | 2026-01-02 03:05:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:05:59.338686 | orchestrator | 2026-01-02 03:05:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:05:59.338725 | orchestrator | 2026-01-02 03:05:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:06:02.393440 | orchestrator | 2026-01-02 03:06:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:06:02.393949 | orchestrator | 2026-01-02 03:06:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:06:02.394160 | orchestrator | 2026-01-02 03:06:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:06:05.439743 | orchestrator | 2026-01-02 03:06:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:06:05.440512 | orchestrator | 2026-01-02 03:06:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:06:05.440800 | orchestrator | 2026-01-02 03:06:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:06:08.484385 | orchestrator | 2026-01-02 03:06:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:06:08.485705 | orchestrator | 2026-01-02 03:06:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:06:08.486114 | orchestrator | 2026-01-02 03:06:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:06:11.536761 | orchestrator | 2026-01-02 03:06:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:06:11.538149 | orchestrator | 2026-01-02 03:06:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:06:11.538186 | orchestrator | 2026-01-02 03:06:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:06:14.587333 | orchestrator | 2026-01-02 03:06:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:06:14.589017 | orchestrator | 2026-01-02 03:06:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:06:14.589071 | orchestrator | 2026-01-02 03:06:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:06:17.636348 | orchestrator | 2026-01-02 03:06:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:06:17.638009 | orchestrator | 2026-01-02 03:06:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:06:17.638174 | orchestrator | 2026-01-02 03:06:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:06:20.682173 | orchestrator | 2026-01-02 03:06:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:06:20.682942 | orchestrator | 2026-01-02 03:06:20 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:06:20.682985 | orchestrator | 2026-01-02 03:06:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:06:23.727454 | orchestrator | 2026-01-02 03:06:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:06:23.729623 | orchestrator | 2026-01-02 03:06:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:06:23.729701 | orchestrator | 2026-01-02 03:06:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:06:26.784554 | orchestrator | 2026-01-02 03:06:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:06:26.785336 | orchestrator | 2026-01-02 03:06:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:06:26.785359 | orchestrator | 2026-01-02 03:06:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:06:29.833796 | orchestrator | 2026-01-02 03:06:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:06:29.834975 | orchestrator | 2026-01-02 03:06:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:06:29.835012 | orchestrator | 2026-01-02 03:06:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:06:32.886333 | orchestrator | 2026-01-02 03:06:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:06:32.886969 | orchestrator | 2026-01-02 03:06:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:06:32.887288 | orchestrator | 2026-01-02 03:06:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:06:35.931484 | orchestrator | 2026-01-02 03:06:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:06:35.932722 | orchestrator | 2026-01-02 03:06:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:06:35.932753 | orchestrator | 2026-01-02 03:06:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:06:38.992585 | orchestrator | 2026-01-02 03:06:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:06:38.995117 | orchestrator | 2026-01-02 03:06:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:06:38.995152 | orchestrator | 2026-01-02 03:06:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:06:42.060240 | orchestrator | 2026-01-02 03:06:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:06:42.060351 | orchestrator | 2026-01-02 03:06:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:06:42.060367 | orchestrator | 2026-01-02 03:06:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:06:45.111381 | orchestrator | 2026-01-02 03:06:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:06:45.113155 | orchestrator | 2026-01-02 03:06:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:06:45.113209 | orchestrator | 2026-01-02 03:06:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:06:48.157753 | orchestrator | 2026-01-02 03:06:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:06:48.159976 | orchestrator | 2026-01-02 03:06:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:06:48.160015 | orchestrator | 2026-01-02 03:06:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:06:51.204505 | orchestrator | 2026-01-02 03:06:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:06:51.206477 | orchestrator | 2026-01-02 03:06:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:06:51.206607 | orchestrator | 2026-01-02 03:06:51 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:06:54.257084 | orchestrator | 2026-01-02 03:06:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:06:54.257197 | orchestrator | 2026-01-02 03:06:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:06:54.257214 | orchestrator | 2026-01-02 03:06:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:06:57.295077 | orchestrator | 2026-01-02 03:06:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:06:57.296967 | orchestrator | 2026-01-02 03:06:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:06:57.297019 | orchestrator | 2026-01-02 03:06:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:00.345332 | orchestrator | 2026-01-02 03:07:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:07:00.346647 | orchestrator | 2026-01-02 03:07:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:07:00.347240 | orchestrator | 2026-01-02 03:07:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:03.397736 | orchestrator | 2026-01-02 03:07:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:07:03.399210 | orchestrator | 2026-01-02 03:07:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:07:03.399244 | orchestrator | 2026-01-02 03:07:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:06.451769 | orchestrator | 2026-01-02 03:07:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:07:06.453367 | orchestrator | 2026-01-02 03:07:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:07:06.453408 | orchestrator | 2026-01-02 03:07:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:09.498714 | orchestrator | 2026-01-02 
03:07:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:07:09.499857 | orchestrator | 2026-01-02 03:07:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:07:09.499913 | orchestrator | 2026-01-02 03:07:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:12.544072 | orchestrator | 2026-01-02 03:07:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:07:12.544510 | orchestrator | 2026-01-02 03:07:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:07:12.544548 | orchestrator | 2026-01-02 03:07:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:15.596105 | orchestrator | 2026-01-02 03:07:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:07:15.597448 | orchestrator | 2026-01-02 03:07:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:07:15.597717 | orchestrator | 2026-01-02 03:07:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:18.650915 | orchestrator | 2026-01-02 03:07:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:07:18.653302 | orchestrator | 2026-01-02 03:07:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:07:18.653374 | orchestrator | 2026-01-02 03:07:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:21.694814 | orchestrator | 2026-01-02 03:07:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:07:21.695086 | orchestrator | 2026-01-02 03:07:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:07:21.695114 | orchestrator | 2026-01-02 03:07:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:24.735102 | orchestrator | 2026-01-02 03:07:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:07:24.738830 | orchestrator | 2026-01-02 03:07:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:07:24.738981 | orchestrator | 2026-01-02 03:07:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:27.786225 | orchestrator | 2026-01-02 03:07:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:07:27.786862 | orchestrator | 2026-01-02 03:07:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:07:27.787362 | orchestrator | 2026-01-02 03:07:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:30.837242 | orchestrator | 2026-01-02 03:07:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:07:30.838416 | orchestrator | 2026-01-02 03:07:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:07:30.838454 | orchestrator | 2026-01-02 03:07:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:33.892373 | orchestrator | 2026-01-02 03:07:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:07:33.893056 | orchestrator | 2026-01-02 03:07:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:07:33.893090 | orchestrator | 2026-01-02 03:07:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:36.939555 | orchestrator | 2026-01-02 03:07:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:07:36.939737 | orchestrator | 2026-01-02 03:07:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:07:36.939760 | orchestrator | 2026-01-02 03:07:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:39.990182 | orchestrator | 2026-01-02 03:07:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:07:39.990919 | orchestrator | 2026-01-02 03:07:39 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:07:39.991080 | orchestrator | 2026-01-02 03:07:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:43.042175 | orchestrator | 2026-01-02 03:07:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:07:43.043556 | orchestrator | 2026-01-02 03:07:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:07:43.043656 | orchestrator | 2026-01-02 03:07:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:46.093475 | orchestrator | 2026-01-02 03:07:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:07:46.094809 | orchestrator | 2026-01-02 03:07:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:07:46.095009 | orchestrator | 2026-01-02 03:07:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:49.146414 | orchestrator | 2026-01-02 03:07:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:07:49.146597 | orchestrator | 2026-01-02 03:07:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:07:49.146665 | orchestrator | 2026-01-02 03:07:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:52.185613 | orchestrator | 2026-01-02 03:07:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:07:52.187081 | orchestrator | 2026-01-02 03:07:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:07:52.187375 | orchestrator | 2026-01-02 03:07:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:55.232668 | orchestrator | 2026-01-02 03:07:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:07:55.233519 | orchestrator | 2026-01-02 03:07:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:07:55.233554 | orchestrator | 2026-01-02 03:07:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:58.281408 | orchestrator | 2026-01-02 03:07:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:07:58.282423 | orchestrator | 2026-01-02 03:07:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:07:58.282689 | orchestrator | 2026-01-02 03:07:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:01.339176 | orchestrator | 2026-01-02 03:08:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:08:01.340204 | orchestrator | 2026-01-02 03:08:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:01.340296 | orchestrator | 2026-01-02 03:08:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:04.389748 | orchestrator | 2026-01-02 03:08:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:08:04.390751 | orchestrator | 2026-01-02 03:08:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:04.390855 | orchestrator | 2026-01-02 03:08:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:07.436216 | orchestrator | 2026-01-02 03:08:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:08:07.437858 | orchestrator | 2026-01-02 03:08:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:07.437931 | orchestrator | 2026-01-02 03:08:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:10.478490 | orchestrator | 2026-01-02 03:08:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:08:10.479117 | orchestrator | 2026-01-02 03:08:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:10.479160 | orchestrator | 2026-01-02 03:08:10 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:08:13.526245 | orchestrator | 2026-01-02 03:08:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:08:13.529620 | orchestrator | 2026-01-02 03:08:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:13.529699 | orchestrator | 2026-01-02 03:08:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:16.586243 | orchestrator | 2026-01-02 03:08:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:08:16.586967 | orchestrator | 2026-01-02 03:08:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:16.587007 | orchestrator | 2026-01-02 03:08:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:19.640566 | orchestrator | 2026-01-02 03:08:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:08:19.643465 | orchestrator | 2026-01-02 03:08:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:19.643556 | orchestrator | 2026-01-02 03:08:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:22.693095 | orchestrator | 2026-01-02 03:08:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:08:22.698249 | orchestrator | 2026-01-02 03:08:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:22.698278 | orchestrator | 2026-01-02 03:08:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:25.745551 | orchestrator | 2026-01-02 03:08:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:08:25.747234 | orchestrator | 2026-01-02 03:08:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:25.747318 | orchestrator | 2026-01-02 03:08:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:28.792464 | orchestrator | 2026-01-02 
03:08:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:08:28.793537 | orchestrator | 2026-01-02 03:08:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:28.793592 | orchestrator | 2026-01-02 03:08:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:31.837503 | orchestrator | 2026-01-02 03:08:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:08:31.837622 | orchestrator | 2026-01-02 03:08:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:31.837782 | orchestrator | 2026-01-02 03:08:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:34.889511 | orchestrator | 2026-01-02 03:08:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:08:34.891064 | orchestrator | 2026-01-02 03:08:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:34.891096 | orchestrator | 2026-01-02 03:08:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:37.940506 | orchestrator | 2026-01-02 03:08:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:08:37.942202 | orchestrator | 2026-01-02 03:08:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:37.942274 | orchestrator | 2026-01-02 03:08:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:40.996034 | orchestrator | 2026-01-02 03:08:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:08:40.997427 | orchestrator | 2026-01-02 03:08:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:40.997482 | orchestrator | 2026-01-02 03:08:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:44.079498 | orchestrator | 2026-01-02 03:08:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:08:44.079603 | orchestrator | 2026-01-02 03:08:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:44.079723 | orchestrator | 2026-01-02 03:08:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:47.132660 | orchestrator | 2026-01-02 03:08:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:08:47.133691 | orchestrator | 2026-01-02 03:08:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:47.133718 | orchestrator | 2026-01-02 03:08:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:50.182334 | orchestrator | 2026-01-02 03:08:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:08:50.185016 | orchestrator | 2026-01-02 03:08:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:50.185087 | orchestrator | 2026-01-02 03:08:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:53.230787 | orchestrator | 2026-01-02 03:08:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:08:53.232207 | orchestrator | 2026-01-02 03:08:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:53.232335 | orchestrator | 2026-01-02 03:08:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:56.276623 | orchestrator | 2026-01-02 03:08:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:08:56.277256 | orchestrator | 2026-01-02 03:08:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:56.277289 | orchestrator | 2026-01-02 03:08:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:59.314309 | orchestrator | 2026-01-02 03:08:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:08:59.314648 | orchestrator | 2026-01-02 03:08:59 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:08:59.314695 | orchestrator | 2026-01-02 03:08:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:02.367355 | orchestrator | 2026-01-02 03:09:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:09:02.368075 | orchestrator | 2026-01-02 03:09:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:09:02.368149 | orchestrator | 2026-01-02 03:09:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:05.419291 | orchestrator | 2026-01-02 03:09:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:09:05.421110 | orchestrator | 2026-01-02 03:09:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:09:05.421508 | orchestrator | 2026-01-02 03:09:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:08.465309 | orchestrator | 2026-01-02 03:09:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:09:08.466558 | orchestrator | 2026-01-02 03:09:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:09:08.466732 | orchestrator | 2026-01-02 03:09:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:11.511705 | orchestrator | 2026-01-02 03:09:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:09:11.513191 | orchestrator | 2026-01-02 03:09:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:09:11.513224 | orchestrator | 2026-01-02 03:09:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:14.559438 | orchestrator | 2026-01-02 03:09:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:09:14.561060 | orchestrator | 2026-01-02 03:09:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:09:14.561111 | orchestrator | 2026-01-02 03:09:14 | INFO  | Wait 1 second(s) until the next check
2026-01-02 03:09:17.609777 | orchestrator | 2026-01-02 03:09:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 03:09:17.611318 | orchestrator | 2026-01-02 03:09:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 03:09:17.611374 | orchestrator | 2026-01-02 03:09:17 | INFO  | Wait 1 second(s) until the next check
2026-01-02 03:14:13.572245 | orchestrator | 2026-01-02 03:14:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 03:14:13.573098 | orchestrator | 2026-01-02 03:14:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 03:14:13.573214 | orchestrator | 2026-01-02 03:14:13 | INFO  | Wait 1 second(s) until the next check
2026-01-02 03:14:16.624749 | orchestrator | 2026-01-02 03:14:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 03:14:16.626443 | orchestrator | 2026-01-02 03:14:16 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:14:16.626531 | orchestrator | 2026-01-02 03:14:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:14:19.675293 | orchestrator | 2026-01-02 03:14:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:14:19.677357 | orchestrator | 2026-01-02 03:14:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:14:19.677493 | orchestrator | 2026-01-02 03:14:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:14:22.717549 | orchestrator | 2026-01-02 03:14:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:14:22.720349 | orchestrator | 2026-01-02 03:14:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:14:22.721209 | orchestrator | 2026-01-02 03:14:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:14:25.773299 | orchestrator | 2026-01-02 03:14:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:14:25.773548 | orchestrator | 2026-01-02 03:14:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:14:25.773584 | orchestrator | 2026-01-02 03:14:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:14:28.825698 | orchestrator | 2026-01-02 03:14:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:14:28.826385 | orchestrator | 2026-01-02 03:14:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:14:28.826422 | orchestrator | 2026-01-02 03:14:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:14:31.879296 | orchestrator | 2026-01-02 03:14:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:14:31.880813 | orchestrator | 2026-01-02 03:14:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:14:31.880846 | orchestrator | 2026-01-02 03:14:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:14:34.931265 | orchestrator | 2026-01-02 03:14:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:14:34.935610 | orchestrator | 2026-01-02 03:14:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:14:34.935678 | orchestrator | 2026-01-02 03:14:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:14:37.981484 | orchestrator | 2026-01-02 03:14:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:14:37.982279 | orchestrator | 2026-01-02 03:14:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:14:37.982607 | orchestrator | 2026-01-02 03:14:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:14:41.033263 | orchestrator | 2026-01-02 03:14:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:14:41.034305 | orchestrator | 2026-01-02 03:14:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:14:41.034336 | orchestrator | 2026-01-02 03:14:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:14:44.081958 | orchestrator | 2026-01-02 03:14:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:14:44.083663 | orchestrator | 2026-01-02 03:14:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:14:44.083698 | orchestrator | 2026-01-02 03:14:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:14:47.133626 | orchestrator | 2026-01-02 03:14:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:14:47.136017 | orchestrator | 2026-01-02 03:14:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:14:47.136297 | orchestrator | 2026-01-02 03:14:47 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:14:50.181932 | orchestrator | 2026-01-02 03:14:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:14:50.184848 | orchestrator | 2026-01-02 03:14:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:14:50.184978 | orchestrator | 2026-01-02 03:14:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:14:53.236611 | orchestrator | 2026-01-02 03:14:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:14:53.237754 | orchestrator | 2026-01-02 03:14:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:14:53.237786 | orchestrator | 2026-01-02 03:14:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:14:56.286280 | orchestrator | 2026-01-02 03:14:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:14:56.286933 | orchestrator | 2026-01-02 03:14:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:14:56.286967 | orchestrator | 2026-01-02 03:14:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:14:59.336419 | orchestrator | 2026-01-02 03:14:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:14:59.338701 | orchestrator | 2026-01-02 03:14:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:14:59.338975 | orchestrator | 2026-01-02 03:14:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:15:02.381009 | orchestrator | 2026-01-02 03:15:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:15:02.381792 | orchestrator | 2026-01-02 03:15:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:15:02.382105 | orchestrator | 2026-01-02 03:15:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:15:05.432736 | orchestrator | 2026-01-02 
03:15:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:15:05.434093 | orchestrator | 2026-01-02 03:15:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:15:05.434140 | orchestrator | 2026-01-02 03:15:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:15:08.484828 | orchestrator | 2026-01-02 03:15:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:15:08.488647 | orchestrator | 2026-01-02 03:15:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:15:08.488980 | orchestrator | 2026-01-02 03:15:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:15:11.531253 | orchestrator | 2026-01-02 03:15:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:15:11.532713 | orchestrator | 2026-01-02 03:15:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:15:11.532772 | orchestrator | 2026-01-02 03:15:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:15:14.581667 | orchestrator | 2026-01-02 03:15:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:15:14.583564 | orchestrator | 2026-01-02 03:15:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:15:14.583622 | orchestrator | 2026-01-02 03:15:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:15:17.630697 | orchestrator | 2026-01-02 03:15:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:15:17.631603 | orchestrator | 2026-01-02 03:15:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:15:17.631689 | orchestrator | 2026-01-02 03:15:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:15:20.678739 | orchestrator | 2026-01-02 03:15:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:15:20.680085 | orchestrator | 2026-01-02 03:15:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:15:20.680135 | orchestrator | 2026-01-02 03:15:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:15:23.728184 | orchestrator | 2026-01-02 03:15:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:15:23.729047 | orchestrator | 2026-01-02 03:15:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:15:23.729311 | orchestrator | 2026-01-02 03:15:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:15:26.779375 | orchestrator | 2026-01-02 03:15:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:15:26.780929 | orchestrator | 2026-01-02 03:15:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:15:26.781371 | orchestrator | 2026-01-02 03:15:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:15:29.833367 | orchestrator | 2026-01-02 03:15:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:15:29.834397 | orchestrator | 2026-01-02 03:15:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:15:29.834551 | orchestrator | 2026-01-02 03:15:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:15:32.879114 | orchestrator | 2026-01-02 03:15:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:15:32.881354 | orchestrator | 2026-01-02 03:15:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:15:32.881393 | orchestrator | 2026-01-02 03:15:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:15:35.929007 | orchestrator | 2026-01-02 03:15:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:15:35.930312 | orchestrator | 2026-01-02 03:15:35 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:15:35.930353 | orchestrator | 2026-01-02 03:15:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:15:38.987283 | orchestrator | 2026-01-02 03:15:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:15:38.989143 | orchestrator | 2026-01-02 03:15:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:15:38.989169 | orchestrator | 2026-01-02 03:15:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:15:42.044746 | orchestrator | 2026-01-02 03:15:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:15:42.044956 | orchestrator | 2026-01-02 03:15:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:15:42.044985 | orchestrator | 2026-01-02 03:15:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:15:45.095144 | orchestrator | 2026-01-02 03:15:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:15:45.095328 | orchestrator | 2026-01-02 03:15:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:15:45.095346 | orchestrator | 2026-01-02 03:15:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:15:48.146704 | orchestrator | 2026-01-02 03:15:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:15:48.149456 | orchestrator | 2026-01-02 03:15:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:15:48.149609 | orchestrator | 2026-01-02 03:15:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:15:51.200426 | orchestrator | 2026-01-02 03:15:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:15:51.201374 | orchestrator | 2026-01-02 03:15:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:15:51.201432 | orchestrator | 2026-01-02 03:15:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:15:54.255183 | orchestrator | 2026-01-02 03:15:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:15:54.257117 | orchestrator | 2026-01-02 03:15:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:15:54.257194 | orchestrator | 2026-01-02 03:15:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:15:57.305950 | orchestrator | 2026-01-02 03:15:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:15:57.307123 | orchestrator | 2026-01-02 03:15:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:15:57.307162 | orchestrator | 2026-01-02 03:15:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:00.364196 | orchestrator | 2026-01-02 03:16:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:16:00.364375 | orchestrator | 2026-01-02 03:16:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:00.364397 | orchestrator | 2026-01-02 03:16:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:03.411920 | orchestrator | 2026-01-02 03:16:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:16:03.413274 | orchestrator | 2026-01-02 03:16:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:03.413306 | orchestrator | 2026-01-02 03:16:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:06.467833 | orchestrator | 2026-01-02 03:16:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:16:06.469262 | orchestrator | 2026-01-02 03:16:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:06.469317 | orchestrator | 2026-01-02 03:16:06 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:16:09.524055 | orchestrator | 2026-01-02 03:16:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:16:09.524143 | orchestrator | 2026-01-02 03:16:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:09.524154 | orchestrator | 2026-01-02 03:16:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:12.571693 | orchestrator | 2026-01-02 03:16:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:16:12.574150 | orchestrator | 2026-01-02 03:16:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:12.574197 | orchestrator | 2026-01-02 03:16:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:15.618481 | orchestrator | 2026-01-02 03:16:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:16:15.621197 | orchestrator | 2026-01-02 03:16:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:15.621255 | orchestrator | 2026-01-02 03:16:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:18.667839 | orchestrator | 2026-01-02 03:16:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:16:18.669277 | orchestrator | 2026-01-02 03:16:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:18.669302 | orchestrator | 2026-01-02 03:16:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:21.711074 | orchestrator | 2026-01-02 03:16:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:16:21.711281 | orchestrator | 2026-01-02 03:16:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:21.711302 | orchestrator | 2026-01-02 03:16:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:24.760557 | orchestrator | 2026-01-02 
03:16:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:16:24.762249 | orchestrator | 2026-01-02 03:16:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:24.762304 | orchestrator | 2026-01-02 03:16:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:27.817724 | orchestrator | 2026-01-02 03:16:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:16:27.820437 | orchestrator | 2026-01-02 03:16:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:27.820522 | orchestrator | 2026-01-02 03:16:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:30.869575 | orchestrator | 2026-01-02 03:16:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:16:30.870641 | orchestrator | 2026-01-02 03:16:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:30.870828 | orchestrator | 2026-01-02 03:16:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:33.919499 | orchestrator | 2026-01-02 03:16:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:16:33.920413 | orchestrator | 2026-01-02 03:16:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:33.920500 | orchestrator | 2026-01-02 03:16:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:36.976441 | orchestrator | 2026-01-02 03:16:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:16:36.977388 | orchestrator | 2026-01-02 03:16:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:36.977420 | orchestrator | 2026-01-02 03:16:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:40.027789 | orchestrator | 2026-01-02 03:16:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:16:40.029368 | orchestrator | 2026-01-02 03:16:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:40.029534 | orchestrator | 2026-01-02 03:16:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:43.072690 | orchestrator | 2026-01-02 03:16:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:16:43.074124 | orchestrator | 2026-01-02 03:16:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:43.074172 | orchestrator | 2026-01-02 03:16:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:46.125190 | orchestrator | 2026-01-02 03:16:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:16:46.127471 | orchestrator | 2026-01-02 03:16:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:46.127827 | orchestrator | 2026-01-02 03:16:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:49.179174 | orchestrator | 2026-01-02 03:16:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:16:49.182240 | orchestrator | 2026-01-02 03:16:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:49.182285 | orchestrator | 2026-01-02 03:16:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:52.223016 | orchestrator | 2026-01-02 03:16:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:16:52.224958 | orchestrator | 2026-01-02 03:16:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:52.225026 | orchestrator | 2026-01-02 03:16:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:55.269076 | orchestrator | 2026-01-02 03:16:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:16:55.269720 | orchestrator | 2026-01-02 03:16:55 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:55.269771 | orchestrator | 2026-01-02 03:16:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:58.319485 | orchestrator | 2026-01-02 03:16:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:16:58.322254 | orchestrator | 2026-01-02 03:16:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:16:58.322320 | orchestrator | 2026-01-02 03:16:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:01.368380 | orchestrator | 2026-01-02 03:17:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:17:01.370455 | orchestrator | 2026-01-02 03:17:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:17:01.370537 | orchestrator | 2026-01-02 03:17:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:04.414999 | orchestrator | 2026-01-02 03:17:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:17:04.416060 | orchestrator | 2026-01-02 03:17:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:17:04.416114 | orchestrator | 2026-01-02 03:17:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:07.460231 | orchestrator | 2026-01-02 03:17:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:17:07.461341 | orchestrator | 2026-01-02 03:17:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:17:07.461434 | orchestrator | 2026-01-02 03:17:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:10.506524 | orchestrator | 2026-01-02 03:17:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:17:10.508978 | orchestrator | 2026-01-02 03:17:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:17:10.509049 | orchestrator | 2026-01-02 03:17:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:13.556136 | orchestrator | 2026-01-02 03:17:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:17:13.557192 | orchestrator | 2026-01-02 03:17:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:17:13.557232 | orchestrator | 2026-01-02 03:17:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:16.601305 | orchestrator | 2026-01-02 03:17:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:17:16.603016 | orchestrator | 2026-01-02 03:17:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:17:16.603062 | orchestrator | 2026-01-02 03:17:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:19.651080 | orchestrator | 2026-01-02 03:17:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:17:19.653692 | orchestrator | 2026-01-02 03:17:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:17:19.653725 | orchestrator | 2026-01-02 03:17:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:22.709687 | orchestrator | 2026-01-02 03:17:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:17:22.711005 | orchestrator | 2026-01-02 03:17:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:17:22.711075 | orchestrator | 2026-01-02 03:17:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:25.763571 | orchestrator | 2026-01-02 03:17:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:17:25.764113 | orchestrator | 2026-01-02 03:17:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:17:25.764200 | orchestrator | 2026-01-02 03:17:25 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:17:28.816757 | orchestrator | 2026-01-02 03:17:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:17:28.817704 | orchestrator | 2026-01-02 03:17:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:17:28.817774 | orchestrator | 2026-01-02 03:17:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:31.873708 | orchestrator | 2026-01-02 03:17:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:17:31.875795 | orchestrator | 2026-01-02 03:17:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:17:31.875851 | orchestrator | 2026-01-02 03:17:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:34.923057 | orchestrator | 2026-01-02 03:17:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:17:34.923740 | orchestrator | 2026-01-02 03:17:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:17:34.923752 | orchestrator | 2026-01-02 03:17:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:37.974569 | orchestrator | 2026-01-02 03:17:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:17:37.975922 | orchestrator | 2026-01-02 03:17:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:17:37.975965 | orchestrator | 2026-01-02 03:17:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:41.023653 | orchestrator | 2026-01-02 03:17:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:17:41.027269 | orchestrator | 2026-01-02 03:17:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:17:41.027332 | orchestrator | 2026-01-02 03:17:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:44.074437 | orchestrator | 2026-01-02 
03:17:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:17:44.075595 | orchestrator | 2026-01-02 03:17:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:17:44.075632 | orchestrator | 2026-01-02 03:17:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:47.116772 | orchestrator | 2026-01-02 03:17:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:17:47.117408 | orchestrator | 2026-01-02 03:17:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:17:47.117489 | orchestrator | 2026-01-02 03:17:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:50.169234 | orchestrator | 2026-01-02 03:17:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:17:50.171069 | orchestrator | 2026-01-02 03:17:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:17:50.171116 | orchestrator | 2026-01-02 03:17:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:53.220166 | orchestrator | 2026-01-02 03:17:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:17:53.222518 | orchestrator | 2026-01-02 03:17:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:17:53.222556 | orchestrator | 2026-01-02 03:17:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:56.263555 | orchestrator | 2026-01-02 03:17:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:17:56.264439 | orchestrator | 2026-01-02 03:17:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:17:56.264699 | orchestrator | 2026-01-02 03:17:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:59.315264 | orchestrator | 2026-01-02 03:17:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:17:59.316736 | orchestrator | 2026-01-02 03:17:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:17:59.316786 | orchestrator | 2026-01-02 03:17:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:02.356783 | orchestrator | 2026-01-02 03:18:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:18:02.357429 | orchestrator | 2026-01-02 03:18:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:18:02.357457 | orchestrator | 2026-01-02 03:18:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:05.409279 | orchestrator | 2026-01-02 03:18:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:18:05.412059 | orchestrator | 2026-01-02 03:18:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:18:05.412121 | orchestrator | 2026-01-02 03:18:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:08.460977 | orchestrator | 2026-01-02 03:18:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:18:08.462651 | orchestrator | 2026-01-02 03:18:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:18:08.462798 | orchestrator | 2026-01-02 03:18:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:11.513357 | orchestrator | 2026-01-02 03:18:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:18:11.513538 | orchestrator | 2026-01-02 03:18:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:18:11.513662 | orchestrator | 2026-01-02 03:18:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:14.565256 | orchestrator | 2026-01-02 03:18:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:18:14.566682 | orchestrator | 2026-01-02 03:18:14 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:18:14.566742 | orchestrator | 2026-01-02 03:18:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:23:31.970794 | orchestrator | 2026-01-02 03:23:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:23:31.977150 | orchestrator | 2026-01-02 03:23:31 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:23:31.977258 | orchestrator | 2026-01-02 03:23:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:23:35.031828 | orchestrator | 2026-01-02 03:23:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:23:35.033189 | orchestrator | 2026-01-02 03:23:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:23:35.033272 | orchestrator | 2026-01-02 03:23:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:23:38.087665 | orchestrator | 2026-01-02 03:23:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:23:38.087770 | orchestrator | 2026-01-02 03:23:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:23:38.087785 | orchestrator | 2026-01-02 03:23:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:23:41.137752 | orchestrator | 2026-01-02 03:23:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:23:41.138549 | orchestrator | 2026-01-02 03:23:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:23:41.138578 | orchestrator | 2026-01-02 03:23:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:23:44.178822 | orchestrator | 2026-01-02 03:23:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:23:44.180629 | orchestrator | 2026-01-02 03:23:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:23:44.180668 | orchestrator | 2026-01-02 03:23:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:23:47.228994 | orchestrator | 2026-01-02 03:23:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:23:47.229729 | orchestrator | 2026-01-02 03:23:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:23:47.229859 | orchestrator | 2026-01-02 03:23:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:23:50.280902 | orchestrator | 2026-01-02 03:23:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:23:50.282935 | orchestrator | 2026-01-02 03:23:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:23:50.282984 | orchestrator | 2026-01-02 03:23:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:23:53.334834 | orchestrator | 2026-01-02 03:23:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:23:53.336571 | orchestrator | 2026-01-02 03:23:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:23:53.336671 | orchestrator | 2026-01-02 03:23:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:23:56.387802 | orchestrator | 2026-01-02 03:23:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:23:56.390531 | orchestrator | 2026-01-02 03:23:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:23:56.390649 | orchestrator | 2026-01-02 03:23:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:23:59.445096 | orchestrator | 2026-01-02 03:23:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:23:59.447155 | orchestrator | 2026-01-02 03:23:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:23:59.447501 | orchestrator | 2026-01-02 03:23:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:24:02.502116 | orchestrator | 2026-01-02 03:24:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:24:02.506516 | orchestrator | 2026-01-02 03:24:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:24:02.506578 | orchestrator | 2026-01-02 03:24:02 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:24:05.557965 | orchestrator | 2026-01-02 03:24:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:24:05.558372 | orchestrator | 2026-01-02 03:24:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:24:05.558403 | orchestrator | 2026-01-02 03:24:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:24:08.616262 | orchestrator | 2026-01-02 03:24:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:24:08.618400 | orchestrator | 2026-01-02 03:24:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:24:08.618485 | orchestrator | 2026-01-02 03:24:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:24:11.672162 | orchestrator | 2026-01-02 03:24:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:24:11.673608 | orchestrator | 2026-01-02 03:24:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:24:11.673676 | orchestrator | 2026-01-02 03:24:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:24:14.721475 | orchestrator | 2026-01-02 03:24:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:24:14.722392 | orchestrator | 2026-01-02 03:24:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:24:14.722470 | orchestrator | 2026-01-02 03:24:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:24:17.772511 | orchestrator | 2026-01-02 03:24:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:24:17.772826 | orchestrator | 2026-01-02 03:24:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:24:17.772846 | orchestrator | 2026-01-02 03:24:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:24:20.825861 | orchestrator | 2026-01-02 
03:24:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:24:20.827504 | orchestrator | 2026-01-02 03:24:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:24:20.827552 | orchestrator | 2026-01-02 03:24:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:24:23.872187 | orchestrator | 2026-01-02 03:24:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:24:23.873197 | orchestrator | 2026-01-02 03:24:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:24:23.873725 | orchestrator | 2026-01-02 03:24:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:24:26.921939 | orchestrator | 2026-01-02 03:24:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:24:26.923278 | orchestrator | 2026-01-02 03:24:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:24:26.923318 | orchestrator | 2026-01-02 03:24:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:24:29.973756 | orchestrator | 2026-01-02 03:24:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:24:29.974796 | orchestrator | 2026-01-02 03:24:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:24:29.974868 | orchestrator | 2026-01-02 03:24:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:24:33.041749 | orchestrator | 2026-01-02 03:24:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:24:33.043404 | orchestrator | 2026-01-02 03:24:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:24:33.043439 | orchestrator | 2026-01-02 03:24:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:24:36.088802 | orchestrator | 2026-01-02 03:24:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:24:36.090175 | orchestrator | 2026-01-02 03:24:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:24:36.090213 | orchestrator | 2026-01-02 03:24:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:24:39.134747 | orchestrator | 2026-01-02 03:24:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:24:39.135056 | orchestrator | 2026-01-02 03:24:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:24:39.135080 | orchestrator | 2026-01-02 03:24:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:24:42.181563 | orchestrator | 2026-01-02 03:24:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:24:42.183961 | orchestrator | 2026-01-02 03:24:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:24:42.184042 | orchestrator | 2026-01-02 03:24:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:24:45.237631 | orchestrator | 2026-01-02 03:24:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:24:45.239745 | orchestrator | 2026-01-02 03:24:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:24:45.239824 | orchestrator | 2026-01-02 03:24:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:24:48.296671 | orchestrator | 2026-01-02 03:24:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:24:48.298846 | orchestrator | 2026-01-02 03:24:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:24:48.298973 | orchestrator | 2026-01-02 03:24:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:24:51.348081 | orchestrator | 2026-01-02 03:24:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:24:51.350456 | orchestrator | 2026-01-02 03:24:51 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:24:51.350580 | orchestrator | 2026-01-02 03:24:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:24:54.402233 | orchestrator | 2026-01-02 03:24:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:24:54.403011 | orchestrator | 2026-01-02 03:24:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:24:54.403046 | orchestrator | 2026-01-02 03:24:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:24:57.444647 | orchestrator | 2026-01-02 03:24:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:24:57.446201 | orchestrator | 2026-01-02 03:24:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:24:57.446255 | orchestrator | 2026-01-02 03:24:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:00.502514 | orchestrator | 2026-01-02 03:25:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:25:00.504691 | orchestrator | 2026-01-02 03:25:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:25:00.504742 | orchestrator | 2026-01-02 03:25:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:03.552678 | orchestrator | 2026-01-02 03:25:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:25:03.557769 | orchestrator | 2026-01-02 03:25:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:25:03.557841 | orchestrator | 2026-01-02 03:25:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:06.602557 | orchestrator | 2026-01-02 03:25:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:25:06.604852 | orchestrator | 2026-01-02 03:25:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:25:06.604921 | orchestrator | 2026-01-02 03:25:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:09.655649 | orchestrator | 2026-01-02 03:25:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:25:09.658131 | orchestrator | 2026-01-02 03:25:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:25:09.658184 | orchestrator | 2026-01-02 03:25:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:12.709802 | orchestrator | 2026-01-02 03:25:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:25:12.711257 | orchestrator | 2026-01-02 03:25:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:25:12.711319 | orchestrator | 2026-01-02 03:25:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:15.761508 | orchestrator | 2026-01-02 03:25:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:25:15.763401 | orchestrator | 2026-01-02 03:25:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:25:15.763482 | orchestrator | 2026-01-02 03:25:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:18.814550 | orchestrator | 2026-01-02 03:25:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:25:18.815683 | orchestrator | 2026-01-02 03:25:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:25:18.815826 | orchestrator | 2026-01-02 03:25:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:21.867109 | orchestrator | 2026-01-02 03:25:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:25:21.869746 | orchestrator | 2026-01-02 03:25:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:25:21.870188 | orchestrator | 2026-01-02 03:25:21 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:25:24.924705 | orchestrator | 2026-01-02 03:25:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:25:24.926791 | orchestrator | 2026-01-02 03:25:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:25:24.926906 | orchestrator | 2026-01-02 03:25:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:27.975044 | orchestrator | 2026-01-02 03:25:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:25:27.976692 | orchestrator | 2026-01-02 03:25:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:25:27.976746 | orchestrator | 2026-01-02 03:25:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:31.029553 | orchestrator | 2026-01-02 03:25:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:25:31.031389 | orchestrator | 2026-01-02 03:25:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:25:31.031449 | orchestrator | 2026-01-02 03:25:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:34.083873 | orchestrator | 2026-01-02 03:25:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:25:34.085690 | orchestrator | 2026-01-02 03:25:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:25:34.085742 | orchestrator | 2026-01-02 03:25:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:37.133575 | orchestrator | 2026-01-02 03:25:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:25:37.135499 | orchestrator | 2026-01-02 03:25:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:25:37.135615 | orchestrator | 2026-01-02 03:25:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:40.178378 | orchestrator | 2026-01-02 
03:25:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:25:40.181261 | orchestrator | 2026-01-02 03:25:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:25:40.181397 | orchestrator | 2026-01-02 03:25:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:43.226603 | orchestrator | 2026-01-02 03:25:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:25:43.230116 | orchestrator | 2026-01-02 03:25:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:25:43.230185 | orchestrator | 2026-01-02 03:25:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:46.280192 | orchestrator | 2026-01-02 03:25:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:25:46.281835 | orchestrator | 2026-01-02 03:25:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:25:46.281919 | orchestrator | 2026-01-02 03:25:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:49.338900 | orchestrator | 2026-01-02 03:25:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:25:49.340279 | orchestrator | 2026-01-02 03:25:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:25:49.340589 | orchestrator | 2026-01-02 03:25:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:52.397170 | orchestrator | 2026-01-02 03:25:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:25:52.399715 | orchestrator | 2026-01-02 03:25:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:25:52.399814 | orchestrator | 2026-01-02 03:25:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:55.453466 | orchestrator | 2026-01-02 03:25:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:25:55.455795 | orchestrator | 2026-01-02 03:25:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:25:55.455952 | orchestrator | 2026-01-02 03:25:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:58.507933 | orchestrator | 2026-01-02 03:25:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:25:58.510441 | orchestrator | 2026-01-02 03:25:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:25:58.510548 | orchestrator | 2026-01-02 03:25:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:01.558853 | orchestrator | 2026-01-02 03:26:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:01.563729 | orchestrator | 2026-01-02 03:26:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:26:01.563785 | orchestrator | 2026-01-02 03:26:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:04.628224 | orchestrator | 2026-01-02 03:26:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:04.630100 | orchestrator | 2026-01-02 03:26:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:26:04.630140 | orchestrator | 2026-01-02 03:26:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:07.680495 | orchestrator | 2026-01-02 03:26:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:07.682822 | orchestrator | 2026-01-02 03:26:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:26:07.682903 | orchestrator | 2026-01-02 03:26:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:10.738886 | orchestrator | 2026-01-02 03:26:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:10.740948 | orchestrator | 2026-01-02 03:26:10 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:26:10.741003 | orchestrator | 2026-01-02 03:26:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:13.790126 | orchestrator | 2026-01-02 03:26:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:13.794718 | orchestrator | 2026-01-02 03:26:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:26:13.794824 | orchestrator | 2026-01-02 03:26:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:16.842249 | orchestrator | 2026-01-02 03:26:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:16.842558 | orchestrator | 2026-01-02 03:26:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:26:16.842590 | orchestrator | 2026-01-02 03:26:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:19.894649 | orchestrator | 2026-01-02 03:26:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:19.896394 | orchestrator | 2026-01-02 03:26:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:26:19.896451 | orchestrator | 2026-01-02 03:26:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:22.942217 | orchestrator | 2026-01-02 03:26:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:22.943695 | orchestrator | 2026-01-02 03:26:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:26:22.943740 | orchestrator | 2026-01-02 03:26:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:25.996606 | orchestrator | 2026-01-02 03:26:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:25.998930 | orchestrator | 2026-01-02 03:26:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:26:25.999096 | orchestrator | 2026-01-02 03:26:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:29.048754 | orchestrator | 2026-01-02 03:26:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:29.050280 | orchestrator | 2026-01-02 03:26:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:26:29.050453 | orchestrator | 2026-01-02 03:26:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:32.098929 | orchestrator | 2026-01-02 03:26:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:32.099640 | orchestrator | 2026-01-02 03:26:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:26:32.099672 | orchestrator | 2026-01-02 03:26:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:35.149779 | orchestrator | 2026-01-02 03:26:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:35.151197 | orchestrator | 2026-01-02 03:26:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:26:35.151467 | orchestrator | 2026-01-02 03:26:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:38.202778 | orchestrator | 2026-01-02 03:26:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:38.203174 | orchestrator | 2026-01-02 03:26:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:26:38.203208 | orchestrator | 2026-01-02 03:26:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:41.255337 | orchestrator | 2026-01-02 03:26:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:41.258269 | orchestrator | 2026-01-02 03:26:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:26:41.258320 | orchestrator | 2026-01-02 03:26:41 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:26:44.311943 | orchestrator | 2026-01-02 03:26:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:44.312992 | orchestrator | 2026-01-02 03:26:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:26:44.313079 | orchestrator | 2026-01-02 03:26:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:47.364518 | orchestrator | 2026-01-02 03:26:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:47.367466 | orchestrator | 2026-01-02 03:26:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:26:47.367574 | orchestrator | 2026-01-02 03:26:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:50.417091 | orchestrator | 2026-01-02 03:26:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:50.417684 | orchestrator | 2026-01-02 03:26:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:26:50.417743 | orchestrator | 2026-01-02 03:26:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:53.461846 | orchestrator | 2026-01-02 03:26:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:53.464005 | orchestrator | 2026-01-02 03:26:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:26:53.464132 | orchestrator | 2026-01-02 03:26:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:56.510633 | orchestrator | 2026-01-02 03:26:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:56.511968 | orchestrator | 2026-01-02 03:26:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:26:56.512012 | orchestrator | 2026-01-02 03:26:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:59.552666 | orchestrator | 2026-01-02 
03:26:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:26:59.553575 | orchestrator | 2026-01-02 03:26:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:26:59.553668 | orchestrator | 2026-01-02 03:26:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:02.597277 | orchestrator | 2026-01-02 03:27:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:27:02.599332 | orchestrator | 2026-01-02 03:27:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:27:02.599437 | orchestrator | 2026-01-02 03:27:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:05.647683 | orchestrator | 2026-01-02 03:27:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:27:05.649834 | orchestrator | 2026-01-02 03:27:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:27:05.649863 | orchestrator | 2026-01-02 03:27:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:08.708661 | orchestrator | 2026-01-02 03:27:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:27:08.710517 | orchestrator | 2026-01-02 03:27:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:27:08.710569 | orchestrator | 2026-01-02 03:27:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:11.760553 | orchestrator | 2026-01-02 03:27:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:27:11.762341 | orchestrator | 2026-01-02 03:27:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:27:11.762403 | orchestrator | 2026-01-02 03:27:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:14.814215 | orchestrator | 2026-01-02 03:27:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:27:14.818225 | orchestrator | 2026-01-02 03:27:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 03:27:14.818281 | orchestrator | 2026-01-02 03:27:14 | INFO  | Wait 1 second(s) until the next check
2026-01-02 03:27:17.870539 | orchestrator | 2026-01-02 03:27:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 03:27:17.872981 | orchestrator | 2026-01-02 03:27:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 03:27:17.873036 | orchestrator | 2026-01-02 03:27:17 | INFO  | Wait 1 second(s) until the next check
[... identical polling entries repeated every ~3 seconds: tasks e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c and 922cb08d-5634-4147-8b36-6e252cfb52ba remain in state STARTED from 03:27:20 through 03:32:29 ...]
2026-01-02 03:32:29.061983 | orchestrator | 2026-01-02 03:32:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 03:32:29.063268 | orchestrator | 2026-01-02 03:32:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 03:32:29.063313 | orchestrator | 2026-01-02 03:32:29 | INFO  | Wait 1 second(s) until the next check
2026-01-02 03:32:32.116951 | orchestrator | 2026-01-02 03:32:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state
STARTED 2026-01-02 03:32:32.118562 | orchestrator | 2026-01-02 03:32:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:32:32.118671 | orchestrator | 2026-01-02 03:32:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:32:35.165753 | orchestrator | 2026-01-02 03:32:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:32:35.167257 | orchestrator | 2026-01-02 03:32:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:32:35.167393 | orchestrator | 2026-01-02 03:32:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:32:38.207921 | orchestrator | 2026-01-02 03:32:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:32:38.208026 | orchestrator | 2026-01-02 03:32:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:32:38.208117 | orchestrator | 2026-01-02 03:32:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:32:41.244919 | orchestrator | 2026-01-02 03:32:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:32:41.246139 | orchestrator | 2026-01-02 03:32:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:32:41.246227 | orchestrator | 2026-01-02 03:32:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:32:44.289881 | orchestrator | 2026-01-02 03:32:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:32:44.291942 | orchestrator | 2026-01-02 03:32:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:32:44.292400 | orchestrator | 2026-01-02 03:32:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:32:47.332871 | orchestrator | 2026-01-02 03:32:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:32:47.333954 | orchestrator | 2026-01-02 03:32:47 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:32:47.333989 | orchestrator | 2026-01-02 03:32:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:32:50.381919 | orchestrator | 2026-01-02 03:32:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:32:50.384496 | orchestrator | 2026-01-02 03:32:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:32:50.385132 | orchestrator | 2026-01-02 03:32:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:32:53.433180 | orchestrator | 2026-01-02 03:32:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:32:53.435923 | orchestrator | 2026-01-02 03:32:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:32:53.435967 | orchestrator | 2026-01-02 03:32:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:32:56.484427 | orchestrator | 2026-01-02 03:32:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:32:56.486300 | orchestrator | 2026-01-02 03:32:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:32:56.486386 | orchestrator | 2026-01-02 03:32:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:32:59.533921 | orchestrator | 2026-01-02 03:32:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:32:59.537277 | orchestrator | 2026-01-02 03:32:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:32:59.537771 | orchestrator | 2026-01-02 03:32:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:33:02.583924 | orchestrator | 2026-01-02 03:33:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:33:02.584751 | orchestrator | 2026-01-02 03:33:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:33:02.584784 | orchestrator | 2026-01-02 03:33:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:33:05.637219 | orchestrator | 2026-01-02 03:33:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:33:05.638848 | orchestrator | 2026-01-02 03:33:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:33:05.638893 | orchestrator | 2026-01-02 03:33:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:33:08.689808 | orchestrator | 2026-01-02 03:33:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:33:08.692582 | orchestrator | 2026-01-02 03:33:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:33:08.692678 | orchestrator | 2026-01-02 03:33:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:33:11.745463 | orchestrator | 2026-01-02 03:33:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:33:11.746816 | orchestrator | 2026-01-02 03:33:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:33:11.746844 | orchestrator | 2026-01-02 03:33:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:33:14.795322 | orchestrator | 2026-01-02 03:33:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:33:14.797098 | orchestrator | 2026-01-02 03:33:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:33:14.797129 | orchestrator | 2026-01-02 03:33:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:33:17.846904 | orchestrator | 2026-01-02 03:33:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:33:17.849472 | orchestrator | 2026-01-02 03:33:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:33:17.849544 | orchestrator | 2026-01-02 03:33:17 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:33:20.901006 | orchestrator | 2026-01-02 03:33:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:33:20.902725 | orchestrator | 2026-01-02 03:33:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:33:20.902797 | orchestrator | 2026-01-02 03:33:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:33:23.948062 | orchestrator | 2026-01-02 03:33:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:33:23.950323 | orchestrator | 2026-01-02 03:33:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:33:23.950417 | orchestrator | 2026-01-02 03:33:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:33:26.995533 | orchestrator | 2026-01-02 03:33:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:33:26.997527 | orchestrator | 2026-01-02 03:33:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:33:26.997634 | orchestrator | 2026-01-02 03:33:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:33:30.048394 | orchestrator | 2026-01-02 03:33:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:33:30.050835 | orchestrator | 2026-01-02 03:33:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:33:30.050878 | orchestrator | 2026-01-02 03:33:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:33:33.100992 | orchestrator | 2026-01-02 03:33:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:33:33.102237 | orchestrator | 2026-01-02 03:33:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:33:33.102436 | orchestrator | 2026-01-02 03:33:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:33:36.155353 | orchestrator | 2026-01-02 
03:33:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:33:36.158127 | orchestrator | 2026-01-02 03:33:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:33:36.158325 | orchestrator | 2026-01-02 03:33:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:33:39.206635 | orchestrator | 2026-01-02 03:33:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:33:39.209355 | orchestrator | 2026-01-02 03:33:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:33:39.209849 | orchestrator | 2026-01-02 03:33:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:33:42.261194 | orchestrator | 2026-01-02 03:33:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:33:42.262730 | orchestrator | 2026-01-02 03:33:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:33:42.262768 | orchestrator | 2026-01-02 03:33:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:33:45.306092 | orchestrator | 2026-01-02 03:33:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:33:45.307697 | orchestrator | 2026-01-02 03:33:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:33:45.307729 | orchestrator | 2026-01-02 03:33:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:33:48.356392 | orchestrator | 2026-01-02 03:33:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:33:48.358906 | orchestrator | 2026-01-02 03:33:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:33:48.359155 | orchestrator | 2026-01-02 03:33:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:33:51.406941 | orchestrator | 2026-01-02 03:33:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:33:51.409303 | orchestrator | 2026-01-02 03:33:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:33:51.409325 | orchestrator | 2026-01-02 03:33:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:33:54.461258 | orchestrator | 2026-01-02 03:33:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:33:54.462472 | orchestrator | 2026-01-02 03:33:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:33:54.462525 | orchestrator | 2026-01-02 03:33:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:33:57.508090 | orchestrator | 2026-01-02 03:33:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:33:57.510332 | orchestrator | 2026-01-02 03:33:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:33:57.510397 | orchestrator | 2026-01-02 03:33:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:00.557756 | orchestrator | 2026-01-02 03:34:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:00.558339 | orchestrator | 2026-01-02 03:34:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:34:00.558364 | orchestrator | 2026-01-02 03:34:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:03.610269 | orchestrator | 2026-01-02 03:34:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:03.611999 | orchestrator | 2026-01-02 03:34:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:34:03.612139 | orchestrator | 2026-01-02 03:34:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:06.654886 | orchestrator | 2026-01-02 03:34:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:06.657940 | orchestrator | 2026-01-02 03:34:06 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:34:06.657972 | orchestrator | 2026-01-02 03:34:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:09.708885 | orchestrator | 2026-01-02 03:34:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:09.710118 | orchestrator | 2026-01-02 03:34:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:34:09.710210 | orchestrator | 2026-01-02 03:34:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:12.755703 | orchestrator | 2026-01-02 03:34:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:12.757008 | orchestrator | 2026-01-02 03:34:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:34:12.757085 | orchestrator | 2026-01-02 03:34:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:15.807154 | orchestrator | 2026-01-02 03:34:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:15.808489 | orchestrator | 2026-01-02 03:34:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:34:15.808675 | orchestrator | 2026-01-02 03:34:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:18.860925 | orchestrator | 2026-01-02 03:34:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:18.862297 | orchestrator | 2026-01-02 03:34:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:34:18.862356 | orchestrator | 2026-01-02 03:34:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:21.908666 | orchestrator | 2026-01-02 03:34:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:21.909646 | orchestrator | 2026-01-02 03:34:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:34:21.909687 | orchestrator | 2026-01-02 03:34:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:24.959690 | orchestrator | 2026-01-02 03:34:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:24.960909 | orchestrator | 2026-01-02 03:34:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:34:24.960934 | orchestrator | 2026-01-02 03:34:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:28.003098 | orchestrator | 2026-01-02 03:34:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:28.005897 | orchestrator | 2026-01-02 03:34:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:34:28.005933 | orchestrator | 2026-01-02 03:34:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:31.046968 | orchestrator | 2026-01-02 03:34:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:31.048349 | orchestrator | 2026-01-02 03:34:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:34:31.048398 | orchestrator | 2026-01-02 03:34:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:34.100265 | orchestrator | 2026-01-02 03:34:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:34.101771 | orchestrator | 2026-01-02 03:34:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:34:34.101901 | orchestrator | 2026-01-02 03:34:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:37.146983 | orchestrator | 2026-01-02 03:34:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:37.147949 | orchestrator | 2026-01-02 03:34:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:34:37.148036 | orchestrator | 2026-01-02 03:34:37 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:34:40.191711 | orchestrator | 2026-01-02 03:34:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:40.193539 | orchestrator | 2026-01-02 03:34:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:34:40.193761 | orchestrator | 2026-01-02 03:34:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:43.238945 | orchestrator | 2026-01-02 03:34:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:43.242559 | orchestrator | 2026-01-02 03:34:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:34:43.242645 | orchestrator | 2026-01-02 03:34:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:46.288943 | orchestrator | 2026-01-02 03:34:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:46.290905 | orchestrator | 2026-01-02 03:34:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:34:46.290962 | orchestrator | 2026-01-02 03:34:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:49.332942 | orchestrator | 2026-01-02 03:34:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:49.334527 | orchestrator | 2026-01-02 03:34:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:34:49.334556 | orchestrator | 2026-01-02 03:34:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:52.387115 | orchestrator | 2026-01-02 03:34:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:52.388862 | orchestrator | 2026-01-02 03:34:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:34:52.389181 | orchestrator | 2026-01-02 03:34:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:55.440000 | orchestrator | 2026-01-02 
03:34:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:55.443003 | orchestrator | 2026-01-02 03:34:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:34:55.443123 | orchestrator | 2026-01-02 03:34:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:58.492490 | orchestrator | 2026-01-02 03:34:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:34:58.494682 | orchestrator | 2026-01-02 03:34:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:34:58.494739 | orchestrator | 2026-01-02 03:34:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:01.545382 | orchestrator | 2026-01-02 03:35:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:35:01.547566 | orchestrator | 2026-01-02 03:35:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:35:01.547696 | orchestrator | 2026-01-02 03:35:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:04.599566 | orchestrator | 2026-01-02 03:35:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:35:04.602010 | orchestrator | 2026-01-02 03:35:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:35:04.602226 | orchestrator | 2026-01-02 03:35:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:07.644947 | orchestrator | 2026-01-02 03:35:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:35:07.647126 | orchestrator | 2026-01-02 03:35:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:35:07.647178 | orchestrator | 2026-01-02 03:35:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:10.691494 | orchestrator | 2026-01-02 03:35:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:35:10.693544 | orchestrator | 2026-01-02 03:35:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:35:10.693581 | orchestrator | 2026-01-02 03:35:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:13.742792 | orchestrator | 2026-01-02 03:35:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:35:13.745006 | orchestrator | 2026-01-02 03:35:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:35:13.745139 | orchestrator | 2026-01-02 03:35:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:16.797747 | orchestrator | 2026-01-02 03:35:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:35:16.799697 | orchestrator | 2026-01-02 03:35:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:35:16.800054 | orchestrator | 2026-01-02 03:35:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:19.853005 | orchestrator | 2026-01-02 03:35:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:35:19.854918 | orchestrator | 2026-01-02 03:35:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:35:19.854994 | orchestrator | 2026-01-02 03:35:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:22.905388 | orchestrator | 2026-01-02 03:35:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:35:22.906861 | orchestrator | 2026-01-02 03:35:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:35:22.906915 | orchestrator | 2026-01-02 03:35:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:25.950556 | orchestrator | 2026-01-02 03:35:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:35:25.951508 | orchestrator | 2026-01-02 03:35:25 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:35:25.951549 | orchestrator | 2026-01-02 03:35:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:28.996421 | orchestrator | 2026-01-02 03:35:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:35:28.998238 | orchestrator | 2026-01-02 03:35:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:35:28.998639 | orchestrator | 2026-01-02 03:35:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:32.045939 | orchestrator | 2026-01-02 03:35:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:35:32.048153 | orchestrator | 2026-01-02 03:35:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:35:32.048212 | orchestrator | 2026-01-02 03:35:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:35.101466 | orchestrator | 2026-01-02 03:35:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:35:35.105053 | orchestrator | 2026-01-02 03:35:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:35:35.105093 | orchestrator | 2026-01-02 03:35:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:38.157513 | orchestrator | 2026-01-02 03:35:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:35:38.158733 | orchestrator | 2026-01-02 03:35:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:35:38.158945 | orchestrator | 2026-01-02 03:35:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:41.208003 | orchestrator | 2026-01-02 03:35:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:35:41.211345 | orchestrator | 2026-01-02 03:35:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:35:41.211397 | orchestrator | 2026-01-02 03:35:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:44.265501 | orchestrator | 2026-01-02 03:35:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:35:44.267699 | orchestrator | 2026-01-02 03:35:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:35:44.267775 | orchestrator | 2026-01-02 03:35:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:47.322104 | orchestrator | 2026-01-02 03:35:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:35:47.323637 | orchestrator | 2026-01-02 03:35:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:35:47.323691 | orchestrator | 2026-01-02 03:35:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:50.381887 | orchestrator | 2026-01-02 03:35:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:35:50.383981 | orchestrator | 2026-01-02 03:35:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:35:50.384092 | orchestrator | 2026-01-02 03:35:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:53.434074 | orchestrator | 2026-01-02 03:35:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:35:53.436541 | orchestrator | 2026-01-02 03:35:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:35:53.436584 | orchestrator | 2026-01-02 03:35:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:56.488532 | orchestrator | 2026-01-02 03:35:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:35:56.490294 | orchestrator | 2026-01-02 03:35:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:35:56.490328 | orchestrator | 2026-01-02 03:35:56 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:35:59.543060 | orchestrator | 2026-01-02 03:35:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:35:59.546197 | orchestrator | 2026-01-02 03:35:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:35:59.546249 | orchestrator | 2026-01-02 03:35:59 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 03:36:02 to 03:41:10; both tasks remained in state STARTED throughout ...]
2026-01-02 03:41:13.836676 | orchestrator | 2026-01-02 03:41:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:41:13.837790 | orchestrator | 2026-01-02 03:41:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:41:13.837939 | orchestrator | 2026-01-02 03:41:13 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:41:16.890890 | orchestrator | 2026-01-02 03:41:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:41:16.893133 | orchestrator | 2026-01-02 03:41:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:41:16.893189 | orchestrator | 2026-01-02 03:41:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:41:19.941321 | orchestrator | 2026-01-02 03:41:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:41:19.943000 | orchestrator | 2026-01-02 03:41:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:41:19.943425 | orchestrator | 2026-01-02 03:41:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:41:22.995100 | orchestrator | 2026-01-02 03:41:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:41:22.997013 | orchestrator | 2026-01-02 03:41:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:41:22.997060 | orchestrator | 2026-01-02 03:41:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:41:26.046546 | orchestrator | 2026-01-02 03:41:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:41:26.050983 | orchestrator | 2026-01-02 03:41:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:41:26.051044 | orchestrator | 2026-01-02 03:41:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:41:29.097371 | orchestrator | 2026-01-02 03:41:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:41:29.100148 | orchestrator | 2026-01-02 03:41:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:41:29.100190 | orchestrator | 2026-01-02 03:41:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:41:32.141318 | orchestrator | 2026-01-02 
03:41:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:41:32.142359 | orchestrator | 2026-01-02 03:41:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:41:32.142396 | orchestrator | 2026-01-02 03:41:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:41:35.192169 | orchestrator | 2026-01-02 03:41:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:41:35.193414 | orchestrator | 2026-01-02 03:41:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:41:35.193554 | orchestrator | 2026-01-02 03:41:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:41:38.237594 | orchestrator | 2026-01-02 03:41:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:41:38.240075 | orchestrator | 2026-01-02 03:41:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:41:38.240127 | orchestrator | 2026-01-02 03:41:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:41:41.287758 | orchestrator | 2026-01-02 03:41:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:41:41.288608 | orchestrator | 2026-01-02 03:41:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:41:41.288637 | orchestrator | 2026-01-02 03:41:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:41:44.346306 | orchestrator | 2026-01-02 03:41:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:41:44.347366 | orchestrator | 2026-01-02 03:41:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:41:44.347458 | orchestrator | 2026-01-02 03:41:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:41:47.391039 | orchestrator | 2026-01-02 03:41:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:41:47.392975 | orchestrator | 2026-01-02 03:41:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:41:47.393047 | orchestrator | 2026-01-02 03:41:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:41:50.444465 | orchestrator | 2026-01-02 03:41:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:41:50.446458 | orchestrator | 2026-01-02 03:41:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:41:50.446562 | orchestrator | 2026-01-02 03:41:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:41:53.497099 | orchestrator | 2026-01-02 03:41:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:41:53.498493 | orchestrator | 2026-01-02 03:41:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:41:53.498537 | orchestrator | 2026-01-02 03:41:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:41:56.551489 | orchestrator | 2026-01-02 03:41:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:41:56.553344 | orchestrator | 2026-01-02 03:41:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:41:56.553399 | orchestrator | 2026-01-02 03:41:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:41:59.611365 | orchestrator | 2026-01-02 03:41:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:41:59.612927 | orchestrator | 2026-01-02 03:41:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:41:59.613026 | orchestrator | 2026-01-02 03:41:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:42:02.657463 | orchestrator | 2026-01-02 03:42:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:42:02.658136 | orchestrator | 2026-01-02 03:42:02 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:42:02.658282 | orchestrator | 2026-01-02 03:42:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:42:05.705806 | orchestrator | 2026-01-02 03:42:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:42:05.708124 | orchestrator | 2026-01-02 03:42:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:42:05.708284 | orchestrator | 2026-01-02 03:42:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:42:08.761339 | orchestrator | 2026-01-02 03:42:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:42:08.763328 | orchestrator | 2026-01-02 03:42:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:42:08.763398 | orchestrator | 2026-01-02 03:42:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:42:11.812883 | orchestrator | 2026-01-02 03:42:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:42:11.814500 | orchestrator | 2026-01-02 03:42:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:42:11.814550 | orchestrator | 2026-01-02 03:42:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:42:14.861171 | orchestrator | 2026-01-02 03:42:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:42:14.861330 | orchestrator | 2026-01-02 03:42:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:42:14.861340 | orchestrator | 2026-01-02 03:42:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:42:17.907656 | orchestrator | 2026-01-02 03:42:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:42:17.908656 | orchestrator | 2026-01-02 03:42:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:42:17.908816 | orchestrator | 2026-01-02 03:42:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:42:20.956155 | orchestrator | 2026-01-02 03:42:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:42:20.958901 | orchestrator | 2026-01-02 03:42:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:42:20.958937 | orchestrator | 2026-01-02 03:42:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:42:24.002458 | orchestrator | 2026-01-02 03:42:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:42:24.004417 | orchestrator | 2026-01-02 03:42:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:42:24.004488 | orchestrator | 2026-01-02 03:42:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:42:27.050192 | orchestrator | 2026-01-02 03:42:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:42:27.051362 | orchestrator | 2026-01-02 03:42:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:42:27.051504 | orchestrator | 2026-01-02 03:42:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:42:30.100476 | orchestrator | 2026-01-02 03:42:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:42:30.103466 | orchestrator | 2026-01-02 03:42:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:42:30.103539 | orchestrator | 2026-01-02 03:42:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:42:33.155571 | orchestrator | 2026-01-02 03:42:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:42:33.158642 | orchestrator | 2026-01-02 03:42:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:42:33.158683 | orchestrator | 2026-01-02 03:42:33 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:42:36.207216 | orchestrator | 2026-01-02 03:42:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:42:36.211071 | orchestrator | 2026-01-02 03:42:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:42:36.211114 | orchestrator | 2026-01-02 03:42:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:42:39.268195 | orchestrator | 2026-01-02 03:42:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:42:39.270475 | orchestrator | 2026-01-02 03:42:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:42:39.270536 | orchestrator | 2026-01-02 03:42:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:42:42.316028 | orchestrator | 2026-01-02 03:42:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:42:42.318425 | orchestrator | 2026-01-02 03:42:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:42:42.318476 | orchestrator | 2026-01-02 03:42:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:42:45.366508 | orchestrator | 2026-01-02 03:42:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:42:45.367843 | orchestrator | 2026-01-02 03:42:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:42:45.367949 | orchestrator | 2026-01-02 03:42:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:42:48.416158 | orchestrator | 2026-01-02 03:42:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:42:48.418262 | orchestrator | 2026-01-02 03:42:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:42:48.418301 | orchestrator | 2026-01-02 03:42:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:42:51.465987 | orchestrator | 2026-01-02 
03:42:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:42:51.467997 | orchestrator | 2026-01-02 03:42:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:42:51.468256 | orchestrator | 2026-01-02 03:42:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:42:54.518156 | orchestrator | 2026-01-02 03:42:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:42:54.520531 | orchestrator | 2026-01-02 03:42:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:42:54.520806 | orchestrator | 2026-01-02 03:42:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:42:57.569274 | orchestrator | 2026-01-02 03:42:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:42:57.570666 | orchestrator | 2026-01-02 03:42:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:42:57.570768 | orchestrator | 2026-01-02 03:42:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:00.621454 | orchestrator | 2026-01-02 03:43:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:43:00.624462 | orchestrator | 2026-01-02 03:43:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:43:00.624525 | orchestrator | 2026-01-02 03:43:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:03.667357 | orchestrator | 2026-01-02 03:43:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:43:03.668588 | orchestrator | 2026-01-02 03:43:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:43:03.668641 | orchestrator | 2026-01-02 03:43:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:06.715731 | orchestrator | 2026-01-02 03:43:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:43:06.717473 | orchestrator | 2026-01-02 03:43:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:43:06.717545 | orchestrator | 2026-01-02 03:43:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:09.763760 | orchestrator | 2026-01-02 03:43:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:43:09.764668 | orchestrator | 2026-01-02 03:43:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:43:09.764735 | orchestrator | 2026-01-02 03:43:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:12.812542 | orchestrator | 2026-01-02 03:43:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:43:12.815216 | orchestrator | 2026-01-02 03:43:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:43:12.815464 | orchestrator | 2026-01-02 03:43:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:15.874456 | orchestrator | 2026-01-02 03:43:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:43:15.875984 | orchestrator | 2026-01-02 03:43:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:43:15.876071 | orchestrator | 2026-01-02 03:43:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:18.930338 | orchestrator | 2026-01-02 03:43:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:43:18.932409 | orchestrator | 2026-01-02 03:43:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:43:18.932459 | orchestrator | 2026-01-02 03:43:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:21.974674 | orchestrator | 2026-01-02 03:43:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:43:21.975382 | orchestrator | 2026-01-02 03:43:21 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:43:21.975423 | orchestrator | 2026-01-02 03:43:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:25.023883 | orchestrator | 2026-01-02 03:43:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:43:25.025267 | orchestrator | 2026-01-02 03:43:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:43:25.025353 | orchestrator | 2026-01-02 03:43:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:28.071570 | orchestrator | 2026-01-02 03:43:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:43:28.074655 | orchestrator | 2026-01-02 03:43:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:43:28.074816 | orchestrator | 2026-01-02 03:43:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:31.126336 | orchestrator | 2026-01-02 03:43:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:43:31.126869 | orchestrator | 2026-01-02 03:43:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:43:31.126983 | orchestrator | 2026-01-02 03:43:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:34.173959 | orchestrator | 2026-01-02 03:43:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:43:34.175946 | orchestrator | 2026-01-02 03:43:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:43:34.175992 | orchestrator | 2026-01-02 03:43:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:37.218722 | orchestrator | 2026-01-02 03:43:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:43:37.220289 | orchestrator | 2026-01-02 03:43:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:43:37.220333 | orchestrator | 2026-01-02 03:43:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:40.268394 | orchestrator | 2026-01-02 03:43:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:43:40.271153 | orchestrator | 2026-01-02 03:43:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:43:40.271263 | orchestrator | 2026-01-02 03:43:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:43.319310 | orchestrator | 2026-01-02 03:43:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:43:43.321457 | orchestrator | 2026-01-02 03:43:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:43:43.321506 | orchestrator | 2026-01-02 03:43:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:46.378294 | orchestrator | 2026-01-02 03:43:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:43:46.381083 | orchestrator | 2026-01-02 03:43:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:43:46.381244 | orchestrator | 2026-01-02 03:43:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:49.433498 | orchestrator | 2026-01-02 03:43:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:43:49.436407 | orchestrator | 2026-01-02 03:43:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:43:49.436464 | orchestrator | 2026-01-02 03:43:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:52.480311 | orchestrator | 2026-01-02 03:43:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:43:52.482233 | orchestrator | 2026-01-02 03:43:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:43:52.482315 | orchestrator | 2026-01-02 03:43:52 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:43:55.535684 | orchestrator | 2026-01-02 03:43:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:43:55.535972 | orchestrator | 2026-01-02 03:43:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:43:55.535999 | orchestrator | 2026-01-02 03:43:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:58.582755 | orchestrator | 2026-01-02 03:43:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:43:58.585135 | orchestrator | 2026-01-02 03:43:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:43:58.585234 | orchestrator | 2026-01-02 03:43:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:01.633398 | orchestrator | 2026-01-02 03:44:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:44:01.635935 | orchestrator | 2026-01-02 03:44:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:44:01.636072 | orchestrator | 2026-01-02 03:44:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:04.681981 | orchestrator | 2026-01-02 03:44:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:44:04.683858 | orchestrator | 2026-01-02 03:44:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:44:04.683917 | orchestrator | 2026-01-02 03:44:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:07.732635 | orchestrator | 2026-01-02 03:44:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:44:07.734980 | orchestrator | 2026-01-02 03:44:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:44:07.735031 | orchestrator | 2026-01-02 03:44:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:10.774555 | orchestrator | 2026-01-02 
03:44:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:44:10.777429 | orchestrator | 2026-01-02 03:44:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:44:10.777738 | orchestrator | 2026-01-02 03:44:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:13.828219 | orchestrator | 2026-01-02 03:44:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:44:13.829930 | orchestrator | 2026-01-02 03:44:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:44:13.829986 | orchestrator | 2026-01-02 03:44:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:16.885486 | orchestrator | 2026-01-02 03:44:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:44:16.887332 | orchestrator | 2026-01-02 03:44:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:44:16.887379 | orchestrator | 2026-01-02 03:44:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:19.937155 | orchestrator | 2026-01-02 03:44:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:44:19.937861 | orchestrator | 2026-01-02 03:44:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:44:19.937898 | orchestrator | 2026-01-02 03:44:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:22.988451 | orchestrator | 2026-01-02 03:44:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:44:22.989974 | orchestrator | 2026-01-02 03:44:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:44:22.990011 | orchestrator | 2026-01-02 03:44:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:26.042326 | orchestrator | 2026-01-02 03:44:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:44:26.044345 | orchestrator | 2026-01-02 03:44:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:44:26.044398 | orchestrator | 2026-01-02 03:44:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:29.091010 | orchestrator | 2026-01-02 03:44:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:44:29.091661 | orchestrator | 2026-01-02 03:44:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:44:29.091723 | orchestrator | 2026-01-02 03:44:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:32.134346 | orchestrator | 2026-01-02 03:44:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:44:32.135233 | orchestrator | 2026-01-02 03:44:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:44:32.135342 | orchestrator | 2026-01-02 03:44:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:35.182131 | orchestrator | 2026-01-02 03:44:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:44:35.183511 | orchestrator | 2026-01-02 03:44:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:44:35.183574 | orchestrator | 2026-01-02 03:44:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:38.233289 | orchestrator | 2026-01-02 03:44:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:44:38.234799 | orchestrator | 2026-01-02 03:44:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:44:38.234829 | orchestrator | 2026-01-02 03:44:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:41.278091 | orchestrator | 2026-01-02 03:44:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:44:41.280023 | orchestrator | 2026-01-02 03:44:41 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:44:41.280055 | orchestrator | 2026-01-02 03:44:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:44.324557 | orchestrator | 2026-01-02 03:44:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:44:44.326472 | orchestrator | 2026-01-02 03:44:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:44:44.326565 | orchestrator | 2026-01-02 03:44:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:47.380874 | orchestrator | 2026-01-02 03:44:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:44:47.382291 | orchestrator | 2026-01-02 03:44:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:44:47.382355 | orchestrator | 2026-01-02 03:44:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:50.428359 | orchestrator | 2026-01-02 03:44:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:44:50.430340 | orchestrator | 2026-01-02 03:44:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:44:50.430409 | orchestrator | 2026-01-02 03:44:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:53.477878 | orchestrator | 2026-01-02 03:44:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:44:53.480058 | orchestrator | 2026-01-02 03:44:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:44:53.480098 | orchestrator | 2026-01-02 03:44:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:56.526576 | orchestrator | 2026-01-02 03:44:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:44:56.529669 | orchestrator | 2026-01-02 03:44:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:44:56.529805 | orchestrator | 2026-01-02 03:44:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:59.582203 | orchestrator | 2026-01-02 03:44:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:44:59.583564 | orchestrator | 2026-01-02 03:44:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:44:59.583688 | orchestrator | 2026-01-02 03:44:59 | INFO  | Wait 1 second(s) until the next check
2026-01-02 03:50:29.087766 | orchestrator | 2026-01-02 03:50:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:50:29.089964 | orchestrator | 2026-01-02 03:50:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:50:29.090109 | orchestrator | 2026-01-02 03:50:29 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:50:32.132231 | orchestrator | 2026-01-02 03:50:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:50:32.134179 | orchestrator | 2026-01-02 03:50:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:50:32.134330 | orchestrator | 2026-01-02 03:50:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:50:35.181110 | orchestrator | 2026-01-02 03:50:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:50:35.182703 | orchestrator | 2026-01-02 03:50:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:50:35.182879 | orchestrator | 2026-01-02 03:50:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:50:38.236586 | orchestrator | 2026-01-02 03:50:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:50:38.238303 | orchestrator | 2026-01-02 03:50:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:50:38.238351 | orchestrator | 2026-01-02 03:50:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:50:41.286215 | orchestrator | 2026-01-02 03:50:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:50:41.288788 | orchestrator | 2026-01-02 03:50:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:50:41.288847 | orchestrator | 2026-01-02 03:50:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:50:44.341121 | orchestrator | 2026-01-02 03:50:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:50:44.342532 | orchestrator | 2026-01-02 03:50:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:50:44.342927 | orchestrator | 2026-01-02 03:50:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:50:47.391450 | orchestrator | 2026-01-02 
03:50:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:50:47.392159 | orchestrator | 2026-01-02 03:50:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:50:47.392293 | orchestrator | 2026-01-02 03:50:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:50:50.451398 | orchestrator | 2026-01-02 03:50:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:50:50.453716 | orchestrator | 2026-01-02 03:50:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:50:50.454518 | orchestrator | 2026-01-02 03:50:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:50:53.503994 | orchestrator | 2026-01-02 03:50:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:50:53.505965 | orchestrator | 2026-01-02 03:50:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:50:53.506074 | orchestrator | 2026-01-02 03:50:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:50:56.554878 | orchestrator | 2026-01-02 03:50:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:50:56.556965 | orchestrator | 2026-01-02 03:50:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:50:56.557005 | orchestrator | 2026-01-02 03:50:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:50:59.602783 | orchestrator | 2026-01-02 03:50:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:50:59.604576 | orchestrator | 2026-01-02 03:50:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:50:59.604635 | orchestrator | 2026-01-02 03:50:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:51:02.649189 | orchestrator | 2026-01-02 03:51:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:51:02.650720 | orchestrator | 2026-01-02 03:51:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:51:02.650825 | orchestrator | 2026-01-02 03:51:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:51:05.695803 | orchestrator | 2026-01-02 03:51:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:51:05.697938 | orchestrator | 2026-01-02 03:51:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:51:05.698079 | orchestrator | 2026-01-02 03:51:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:51:08.748531 | orchestrator | 2026-01-02 03:51:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:51:08.749997 | orchestrator | 2026-01-02 03:51:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:51:08.750091 | orchestrator | 2026-01-02 03:51:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:51:11.796118 | orchestrator | 2026-01-02 03:51:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:51:11.798918 | orchestrator | 2026-01-02 03:51:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:51:11.798985 | orchestrator | 2026-01-02 03:51:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:51:14.844973 | orchestrator | 2026-01-02 03:51:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:51:14.846366 | orchestrator | 2026-01-02 03:51:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:51:14.846416 | orchestrator | 2026-01-02 03:51:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:51:17.894925 | orchestrator | 2026-01-02 03:51:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:51:17.896093 | orchestrator | 2026-01-02 03:51:17 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:51:17.896129 | orchestrator | 2026-01-02 03:51:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:51:20.939112 | orchestrator | 2026-01-02 03:51:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:51:20.940999 | orchestrator | 2026-01-02 03:51:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:51:20.941138 | orchestrator | 2026-01-02 03:51:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:51:23.989112 | orchestrator | 2026-01-02 03:51:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:51:23.990672 | orchestrator | 2026-01-02 03:51:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:51:23.990764 | orchestrator | 2026-01-02 03:51:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:51:27.033494 | orchestrator | 2026-01-02 03:51:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:51:27.034124 | orchestrator | 2026-01-02 03:51:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:51:27.034252 | orchestrator | 2026-01-02 03:51:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:51:30.083577 | orchestrator | 2026-01-02 03:51:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:51:30.085375 | orchestrator | 2026-01-02 03:51:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:51:30.085404 | orchestrator | 2026-01-02 03:51:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:51:33.130060 | orchestrator | 2026-01-02 03:51:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:51:33.133776 | orchestrator | 2026-01-02 03:51:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:51:33.133796 | orchestrator | 2026-01-02 03:51:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:51:36.181478 | orchestrator | 2026-01-02 03:51:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:51:36.183490 | orchestrator | 2026-01-02 03:51:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:51:36.183577 | orchestrator | 2026-01-02 03:51:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:51:39.222719 | orchestrator | 2026-01-02 03:51:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:51:39.223443 | orchestrator | 2026-01-02 03:51:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:51:39.223500 | orchestrator | 2026-01-02 03:51:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:51:42.265177 | orchestrator | 2026-01-02 03:51:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:51:42.265345 | orchestrator | 2026-01-02 03:51:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:51:42.265367 | orchestrator | 2026-01-02 03:51:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:51:45.320677 | orchestrator | 2026-01-02 03:51:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:51:45.322463 | orchestrator | 2026-01-02 03:51:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:51:45.322524 | orchestrator | 2026-01-02 03:51:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:51:48.370664 | orchestrator | 2026-01-02 03:51:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:51:48.371651 | orchestrator | 2026-01-02 03:51:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:51:48.371678 | orchestrator | 2026-01-02 03:51:48 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:51:51.420992 | orchestrator | 2026-01-02 03:51:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:51:51.422622 | orchestrator | 2026-01-02 03:51:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:51:51.422779 | orchestrator | 2026-01-02 03:51:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:51:54.471623 | orchestrator | 2026-01-02 03:51:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:51:54.473243 | orchestrator | 2026-01-02 03:51:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:51:54.473268 | orchestrator | 2026-01-02 03:51:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:51:57.521330 | orchestrator | 2026-01-02 03:51:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:51:57.524512 | orchestrator | 2026-01-02 03:51:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:51:57.524649 | orchestrator | 2026-01-02 03:51:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:00.573618 | orchestrator | 2026-01-02 03:52:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:52:00.574487 | orchestrator | 2026-01-02 03:52:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:52:00.574540 | orchestrator | 2026-01-02 03:52:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:03.624577 | orchestrator | 2026-01-02 03:52:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:52:03.626602 | orchestrator | 2026-01-02 03:52:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:52:03.626838 | orchestrator | 2026-01-02 03:52:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:06.675532 | orchestrator | 2026-01-02 
03:52:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:52:06.676929 | orchestrator | 2026-01-02 03:52:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:52:06.676990 | orchestrator | 2026-01-02 03:52:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:09.723432 | orchestrator | 2026-01-02 03:52:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:52:09.726007 | orchestrator | 2026-01-02 03:52:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:52:09.726116 | orchestrator | 2026-01-02 03:52:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:12.777853 | orchestrator | 2026-01-02 03:52:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:52:12.779428 | orchestrator | 2026-01-02 03:52:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:52:12.779475 | orchestrator | 2026-01-02 03:52:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:15.828422 | orchestrator | 2026-01-02 03:52:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:52:15.829954 | orchestrator | 2026-01-02 03:52:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:52:15.830162 | orchestrator | 2026-01-02 03:52:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:18.879062 | orchestrator | 2026-01-02 03:52:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:52:18.881681 | orchestrator | 2026-01-02 03:52:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:52:18.881761 | orchestrator | 2026-01-02 03:52:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:21.925693 | orchestrator | 2026-01-02 03:52:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:52:21.928379 | orchestrator | 2026-01-02 03:52:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:52:21.928468 | orchestrator | 2026-01-02 03:52:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:24.973052 | orchestrator | 2026-01-02 03:52:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:52:24.974267 | orchestrator | 2026-01-02 03:52:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:52:24.974313 | orchestrator | 2026-01-02 03:52:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:28.024114 | orchestrator | 2026-01-02 03:52:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:52:28.024322 | orchestrator | 2026-01-02 03:52:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:52:28.024343 | orchestrator | 2026-01-02 03:52:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:31.074655 | orchestrator | 2026-01-02 03:52:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:52:31.076863 | orchestrator | 2026-01-02 03:52:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:52:31.076927 | orchestrator | 2026-01-02 03:52:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:34.126565 | orchestrator | 2026-01-02 03:52:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:52:34.127644 | orchestrator | 2026-01-02 03:52:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:52:34.127677 | orchestrator | 2026-01-02 03:52:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:37.181207 | orchestrator | 2026-01-02 03:52:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:52:37.183219 | orchestrator | 2026-01-02 03:52:37 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:52:37.183350 | orchestrator | 2026-01-02 03:52:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:40.241040 | orchestrator | 2026-01-02 03:52:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:52:40.241140 | orchestrator | 2026-01-02 03:52:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:52:40.241198 | orchestrator | 2026-01-02 03:52:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:43.292481 | orchestrator | 2026-01-02 03:52:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:52:43.295105 | orchestrator | 2026-01-02 03:52:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:52:43.295140 | orchestrator | 2026-01-02 03:52:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:46.347227 | orchestrator | 2026-01-02 03:52:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:52:46.349237 | orchestrator | 2026-01-02 03:52:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:52:46.349356 | orchestrator | 2026-01-02 03:52:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:49.405065 | orchestrator | 2026-01-02 03:52:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:52:49.406352 | orchestrator | 2026-01-02 03:52:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:52:49.406408 | orchestrator | 2026-01-02 03:52:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:52.464628 | orchestrator | 2026-01-02 03:52:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:52:52.468169 | orchestrator | 2026-01-02 03:52:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:52:52.468207 | orchestrator | 2026-01-02 03:52:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:55.522166 | orchestrator | 2026-01-02 03:52:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:52:55.523637 | orchestrator | 2026-01-02 03:52:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:52:55.523759 | orchestrator | 2026-01-02 03:52:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:58.574461 | orchestrator | 2026-01-02 03:52:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:52:58.576209 | orchestrator | 2026-01-02 03:52:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:52:58.576242 | orchestrator | 2026-01-02 03:52:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:01.626832 | orchestrator | 2026-01-02 03:53:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:53:01.628409 | orchestrator | 2026-01-02 03:53:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:01.628519 | orchestrator | 2026-01-02 03:53:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:04.676040 | orchestrator | 2026-01-02 03:53:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:53:04.677991 | orchestrator | 2026-01-02 03:53:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:04.678153 | orchestrator | 2026-01-02 03:53:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:07.731738 | orchestrator | 2026-01-02 03:53:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:53:07.733431 | orchestrator | 2026-01-02 03:53:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:07.733482 | orchestrator | 2026-01-02 03:53:07 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:53:10.791211 | orchestrator | 2026-01-02 03:53:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:53:10.793163 | orchestrator | 2026-01-02 03:53:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:10.793200 | orchestrator | 2026-01-02 03:53:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:13.846271 | orchestrator | 2026-01-02 03:53:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:53:13.848465 | orchestrator | 2026-01-02 03:53:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:13.848503 | orchestrator | 2026-01-02 03:53:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:16.896995 | orchestrator | 2026-01-02 03:53:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:53:16.899773 | orchestrator | 2026-01-02 03:53:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:16.899861 | orchestrator | 2026-01-02 03:53:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:19.946404 | orchestrator | 2026-01-02 03:53:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:53:19.948235 | orchestrator | 2026-01-02 03:53:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:19.948330 | orchestrator | 2026-01-02 03:53:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:22.999823 | orchestrator | 2026-01-02 03:53:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:53:23.002226 | orchestrator | 2026-01-02 03:53:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:23.002305 | orchestrator | 2026-01-02 03:53:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:26.056957 | orchestrator | 2026-01-02 
03:53:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:53:26.057987 | orchestrator | 2026-01-02 03:53:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:26.058179 | orchestrator | 2026-01-02 03:53:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:29.104989 | orchestrator | 2026-01-02 03:53:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:53:29.107388 | orchestrator | 2026-01-02 03:53:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:29.107456 | orchestrator | 2026-01-02 03:53:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:32.157128 | orchestrator | 2026-01-02 03:53:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:53:32.159245 | orchestrator | 2026-01-02 03:53:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:32.159322 | orchestrator | 2026-01-02 03:53:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:35.210488 | orchestrator | 2026-01-02 03:53:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:53:35.211760 | orchestrator | 2026-01-02 03:53:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:35.211794 | orchestrator | 2026-01-02 03:53:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:38.261328 | orchestrator | 2026-01-02 03:53:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:53:38.263191 | orchestrator | 2026-01-02 03:53:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:38.263305 | orchestrator | 2026-01-02 03:53:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:41.311615 | orchestrator | 2026-01-02 03:53:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:53:41.313104 | orchestrator | 2026-01-02 03:53:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:41.313206 | orchestrator | 2026-01-02 03:53:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:44.364796 | orchestrator | 2026-01-02 03:53:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:53:44.365934 | orchestrator | 2026-01-02 03:53:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:44.366206 | orchestrator | 2026-01-02 03:53:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:47.417033 | orchestrator | 2026-01-02 03:53:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:53:47.418120 | orchestrator | 2026-01-02 03:53:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:47.418162 | orchestrator | 2026-01-02 03:53:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:50.465292 | orchestrator | 2026-01-02 03:53:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:53:50.466099 | orchestrator | 2026-01-02 03:53:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:50.466241 | orchestrator | 2026-01-02 03:53:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:53.511144 | orchestrator | 2026-01-02 03:53:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:53:53.516086 | orchestrator | 2026-01-02 03:53:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:53.516169 | orchestrator | 2026-01-02 03:53:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:56.563228 | orchestrator | 2026-01-02 03:53:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:53:56.565271 | orchestrator | 2026-01-02 03:53:56 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:56.565350 | orchestrator | 2026-01-02 03:53:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:59.614792 | orchestrator | 2026-01-02 03:53:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:53:59.615447 | orchestrator | 2026-01-02 03:53:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:53:59.615486 | orchestrator | 2026-01-02 03:53:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:02.660759 | orchestrator | 2026-01-02 03:54:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:54:02.662456 | orchestrator | 2026-01-02 03:54:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:54:02.662545 | orchestrator | 2026-01-02 03:54:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:05.703738 | orchestrator | 2026-01-02 03:54:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:54:05.704424 | orchestrator | 2026-01-02 03:54:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:54:05.704459 | orchestrator | 2026-01-02 03:54:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:08.753666 | orchestrator | 2026-01-02 03:54:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:54:08.756178 | orchestrator | 2026-01-02 03:54:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:54:08.756249 | orchestrator | 2026-01-02 03:54:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:11.810567 | orchestrator | 2026-01-02 03:54:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:54:11.811666 | orchestrator | 2026-01-02 03:54:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:54:11.811713 | orchestrator | 2026-01-02 03:54:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:14.866279 | orchestrator | 2026-01-02 03:54:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:54:14.869330 | orchestrator | 2026-01-02 03:54:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:54:14.869394 | orchestrator | 2026-01-02 03:54:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:17.920573 | orchestrator | 2026-01-02 03:54:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:54:17.921242 | orchestrator | 2026-01-02 03:54:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:54:17.921350 | orchestrator | 2026-01-02 03:54:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:20.968906 | orchestrator | 2026-01-02 03:54:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:54:20.971377 | orchestrator | 2026-01-02 03:54:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:54:20.971418 | orchestrator | 2026-01-02 03:54:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:24.028921 | orchestrator | 2026-01-02 03:54:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:54:24.029923 | orchestrator | 2026-01-02 03:54:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:54:24.030241 | orchestrator | 2026-01-02 03:54:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:27.079799 | orchestrator | 2026-01-02 03:54:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:54:27.080171 | orchestrator | 2026-01-02 03:54:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:54:27.080190 | orchestrator | 2026-01-02 03:54:27 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:54:30.127678 | orchestrator | 2026-01-02 03:54:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:54:30.129390 | orchestrator | 2026-01-02 03:54:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:54:30.129440 | orchestrator | 2026-01-02 03:54:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:33.181533 | orchestrator | 2026-01-02 03:54:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:54:33.181903 | orchestrator | 2026-01-02 03:54:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:54:33.181950 | orchestrator | 2026-01-02 03:54:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:36.228025 | orchestrator | 2026-01-02 03:54:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:54:36.231015 | orchestrator | 2026-01-02 03:54:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:54:36.231099 | orchestrator | 2026-01-02 03:54:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:39.276522 | orchestrator | 2026-01-02 03:54:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:54:39.277998 | orchestrator | 2026-01-02 03:54:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:54:39.278250 | orchestrator | 2026-01-02 03:54:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:42.324095 | orchestrator | 2026-01-02 03:54:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:54:42.326958 | orchestrator | 2026-01-02 03:54:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:54:42.327027 | orchestrator | 2026-01-02 03:54:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:45.375311 | orchestrator | 2026-01-02 
03:54:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:54:45.377885 | orchestrator | 2026-01-02 03:54:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:54:45.378112 | orchestrator | 2026-01-02 03:54:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:48.426911 | orchestrator | 2026-01-02 03:54:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:54:48.429426 | orchestrator | 2026-01-02 03:54:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:54:48.429559 | orchestrator | 2026-01-02 03:54:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:51.478346 | orchestrator | 2026-01-02 03:54:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:54:51.480545 | orchestrator | 2026-01-02 03:54:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:54:51.480620 | orchestrator | 2026-01-02 03:54:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:54.532525 | orchestrator | 2026-01-02 03:54:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:54:54.534282 | orchestrator | 2026-01-02 03:54:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:54:54.534338 | orchestrator | 2026-01-02 03:54:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:57.577707 | orchestrator | 2026-01-02 03:54:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:54:57.579507 | orchestrator | 2026-01-02 03:54:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:54:57.579641 | orchestrator | 2026-01-02 03:54:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:00.627143 | orchestrator | 2026-01-02 03:55:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:55:00.628102 | orchestrator | 2026-01-02 03:55:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:55:00.628155 | orchestrator | 2026-01-02 03:55:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:03.674172 | orchestrator | 2026-01-02 03:55:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:55:03.675447 | orchestrator | 2026-01-02 03:55:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:55:03.675561 | orchestrator | 2026-01-02 03:55:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:06.721409 | orchestrator | 2026-01-02 03:55:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:55:06.723176 | orchestrator | 2026-01-02 03:55:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:55:06.723246 | orchestrator | 2026-01-02 03:55:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:09.768071 | orchestrator | 2026-01-02 03:55:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:55:09.769182 | orchestrator | 2026-01-02 03:55:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:55:09.769211 | orchestrator | 2026-01-02 03:55:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:12.810512 | orchestrator | 2026-01-02 03:55:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:55:12.812141 | orchestrator | 2026-01-02 03:55:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:55:12.812198 | orchestrator | 2026-01-02 03:55:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:15.866774 | orchestrator | 2026-01-02 03:55:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:55:15.868346 | orchestrator | 2026-01-02 03:55:15 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:55:15.868551 | orchestrator | 2026-01-02 03:55:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:18.915865 | orchestrator | 2026-01-02 03:55:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:55:18.917538 | orchestrator | 2026-01-02 03:55:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:55:18.917566 | orchestrator | 2026-01-02 03:55:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:21.967807 | orchestrator | 2026-01-02 03:55:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:55:21.969385 | orchestrator | 2026-01-02 03:55:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:55:21.969420 | orchestrator | 2026-01-02 03:55:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:25.027188 | orchestrator | 2026-01-02 03:55:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:55:25.029956 | orchestrator | 2026-01-02 03:55:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:55:25.030094 | orchestrator | 2026-01-02 03:55:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:28.081553 | orchestrator | 2026-01-02 03:55:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:55:28.083070 | orchestrator | 2026-01-02 03:55:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:55:28.083118 | orchestrator | 2026-01-02 03:55:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:31.136816 | orchestrator | 2026-01-02 03:55:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:55:31.140240 | orchestrator | 2026-01-02 03:55:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:55:31.140352 | orchestrator | 2026-01-02 03:55:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:34.198504 | orchestrator | 2026-01-02 03:55:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:55:34.200668 | orchestrator | 2026-01-02 03:55:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:55:34.200793 | orchestrator | 2026-01-02 03:55:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:37.257635 | orchestrator | 2026-01-02 03:55:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:55:37.260453 | orchestrator | 2026-01-02 03:55:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:55:37.260641 | orchestrator | 2026-01-02 03:55:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:40.303760 | orchestrator | 2026-01-02 03:55:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:55:40.306624 | orchestrator | 2026-01-02 03:55:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:55:40.306723 | orchestrator | 2026-01-02 03:55:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:43.358430 | orchestrator | 2026-01-02 03:55:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:55:43.362723 | orchestrator | 2026-01-02 03:55:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:55:43.362792 | orchestrator | 2026-01-02 03:55:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:46.412825 | orchestrator | 2026-01-02 03:55:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:55:46.414366 | orchestrator | 2026-01-02 03:55:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:55:46.414420 | orchestrator | 2026-01-02 03:55:46 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:55:49.463161 | orchestrator | 2026-01-02 03:55:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:55:49.465066 | orchestrator | 2026-01-02 03:55:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:55:49.465114 | orchestrator | 2026-01-02 03:55:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:52.515154 | orchestrator | 2026-01-02 03:55:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:55:52.517236 | orchestrator | 2026-01-02 03:55:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:55:52.517273 | orchestrator | 2026-01-02 03:55:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:55.570253 | orchestrator | 2026-01-02 03:55:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:55:55.572969 | orchestrator | 2026-01-02 03:55:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:55:55.573766 | orchestrator | 2026-01-02 03:55:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:58.628318 | orchestrator | 2026-01-02 03:55:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:55:58.630497 | orchestrator | 2026-01-02 03:55:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:55:58.630535 | orchestrator | 2026-01-02 03:55:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:01.679658 | orchestrator | 2026-01-02 03:56:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:56:01.681672 | orchestrator | 2026-01-02 03:56:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:56:01.681757 | orchestrator | 2026-01-02 03:56:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:04.724890 | orchestrator | 2026-01-02 
03:56:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:56:04.726904 | orchestrator | 2026-01-02 03:56:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:56:04.726965 | orchestrator | 2026-01-02 03:56:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:07.770622 | orchestrator | 2026-01-02 03:56:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:56:07.773721 | orchestrator | 2026-01-02 03:56:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:56:07.773768 | orchestrator | 2026-01-02 03:56:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:10.822554 | orchestrator | 2026-01-02 03:56:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:56:10.824657 | orchestrator | 2026-01-02 03:56:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:56:10.824704 | orchestrator | 2026-01-02 03:56:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:13.876626 | orchestrator | 2026-01-02 03:56:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:56:13.877719 | orchestrator | 2026-01-02 03:56:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:56:13.877757 | orchestrator | 2026-01-02 03:56:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:16.928805 | orchestrator | 2026-01-02 03:56:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:56:16.930220 | orchestrator | 2026-01-02 03:56:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:56:16.930314 | orchestrator | 2026-01-02 03:56:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:19.981284 | orchestrator | 2026-01-02 03:56:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:56:19.982573 | orchestrator | 2026-01-02 03:56:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:56:19.982666 | orchestrator | 2026-01-02 03:56:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:23.040634 | orchestrator | 2026-01-02 03:56:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:56:23.042584 | orchestrator | 2026-01-02 03:56:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:56:23.042616 | orchestrator | 2026-01-02 03:56:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:26.091723 | orchestrator | 2026-01-02 03:56:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:56:26.094371 | orchestrator | 2026-01-02 03:56:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:56:26.094426 | orchestrator | 2026-01-02 03:56:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:29.141942 | orchestrator | 2026-01-02 03:56:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:56:29.143549 | orchestrator | 2026-01-02 03:56:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:56:29.143859 | orchestrator | 2026-01-02 03:56:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:32.189905 | orchestrator | 2026-01-02 03:56:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:56:32.192443 | orchestrator | 2026-01-02 03:56:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:56:32.192554 | orchestrator | 2026-01-02 03:56:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:35.246759 | orchestrator | 2026-01-02 03:56:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:56:35.248174 | orchestrator | 2026-01-02 03:56:35 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:56:35.248225 | orchestrator | 2026-01-02 03:56:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:38.290791 | orchestrator | 2026-01-02 03:56:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:56:38.293666 | orchestrator | 2026-01-02 03:56:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:56:38.293740 | orchestrator | 2026-01-02 03:56:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:41.341451 | orchestrator | 2026-01-02 03:56:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:56:41.342895 | orchestrator | 2026-01-02 03:56:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:56:41.342951 | orchestrator | 2026-01-02 03:56:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:44.387891 | orchestrator | 2026-01-02 03:56:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:56:44.388048 | orchestrator | 2026-01-02 03:56:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:56:44.388286 | orchestrator | 2026-01-02 03:56:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:47.437220 | orchestrator | 2026-01-02 03:56:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:56:47.439072 | orchestrator | 2026-01-02 03:56:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:56:47.439133 | orchestrator | 2026-01-02 03:56:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:50.482233 | orchestrator | 2026-01-02 03:56:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:56:50.484454 | orchestrator | 2026-01-02 03:56:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:56:50.484485 | orchestrator | 2026-01-02 03:56:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:53.535926 | orchestrator | 2026-01-02 03:56:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:56:53.537677 | orchestrator | 2026-01-02 03:56:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:56:53.537766 | orchestrator | 2026-01-02 03:56:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:56.588495 | orchestrator | 2026-01-02 03:56:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:56:56.591542 | orchestrator | 2026-01-02 03:56:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:56:56.591588 | orchestrator | 2026-01-02 03:56:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:59.638466 | orchestrator | 2026-01-02 03:56:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:56:59.640546 | orchestrator | 2026-01-02 03:56:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:56:59.640728 | orchestrator | 2026-01-02 03:56:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:57:02.686728 | orchestrator | 2026-01-02 03:57:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:57:02.688366 | orchestrator | 2026-01-02 03:57:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:57:02.688403 | orchestrator | 2026-01-02 03:57:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:57:05.737293 | orchestrator | 2026-01-02 03:57:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:57:05.739141 | orchestrator | 2026-01-02 03:57:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:57:05.739202 | orchestrator | 2026-01-02 03:57:05 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:57:08.788505 | orchestrator | 2026-01-02 03:57:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:57:08.792011 | orchestrator | 2026-01-02 03:57:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:57:08.792145 | orchestrator | 2026-01-02 03:57:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:57:11.838580 | orchestrator | 2026-01-02 03:57:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:57:11.840346 | orchestrator | 2026-01-02 03:57:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:57:11.840489 | orchestrator | 2026-01-02 03:57:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:57:14.889392 | orchestrator | 2026-01-02 03:57:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:57:14.890767 | orchestrator | 2026-01-02 03:57:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:57:14.890856 | orchestrator | 2026-01-02 03:57:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:57:17.937582 | orchestrator | 2026-01-02 03:57:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:57:17.940396 | orchestrator | 2026-01-02 03:57:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:57:17.940492 | orchestrator | 2026-01-02 03:57:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:57:20.984081 | orchestrator | 2026-01-02 03:57:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:57:20.984832 | orchestrator | 2026-01-02 03:57:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:57:20.984872 | orchestrator | 2026-01-02 03:57:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:57:24.037583 | orchestrator | 2026-01-02 
03:57:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:57:24.039410 | orchestrator | 2026-01-02 03:57:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:57:24.039465 | orchestrator | 2026-01-02 03:57:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:57:27.088801 | orchestrator | 2026-01-02 03:57:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:57:27.089703 | orchestrator | 2026-01-02 03:57:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:57:27.089740 | orchestrator | 2026-01-02 03:57:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:57:30.137052 | orchestrator | 2026-01-02 03:57:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:57:30.139529 | orchestrator | 2026-01-02 03:57:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:57:30.139612 | orchestrator | 2026-01-02 03:57:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:57:33.175549 | orchestrator | 2026-01-02 03:57:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:57:33.175854 | orchestrator | 2026-01-02 03:57:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:57:33.175901 | orchestrator | 2026-01-02 03:57:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:57:36.223041 | orchestrator | 2026-01-02 03:57:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:57:36.226166 | orchestrator | 2026-01-02 03:57:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:57:36.226203 | orchestrator | 2026-01-02 03:57:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:57:39.276263 | orchestrator | 2026-01-02 03:57:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 03:57:39.277872 | orchestrator | 2026-01-02 03:57:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:57:39.277939 | orchestrator | 2026-01-02 03:57:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:57:42.319041 | orchestrator | 2026-01-02 03:57:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:57:42.321159 | orchestrator | 2026-01-02 03:57:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:57:42.321274 | orchestrator | 2026-01-02 03:57:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:57:45.374762 | orchestrator | 2026-01-02 03:57:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:57:45.375133 | orchestrator | 2026-01-02 03:57:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:57:45.375187 | orchestrator | 2026-01-02 03:57:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:57:48.419705 | orchestrator | 2026-01-02 03:57:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:57:48.421096 | orchestrator | 2026-01-02 03:57:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:57:48.421328 | orchestrator | 2026-01-02 03:57:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:57:51.465236 | orchestrator | 2026-01-02 03:57:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:57:51.467290 | orchestrator | 2026-01-02 03:57:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:57:51.467367 | orchestrator | 2026-01-02 03:57:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:57:54.518175 | orchestrator | 2026-01-02 03:57:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:57:54.518550 | orchestrator | 2026-01-02 03:57:54 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:57:54.518583 | orchestrator | 2026-01-02 03:57:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:57:57.564641 | orchestrator | 2026-01-02 03:57:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:57:57.568183 | orchestrator | 2026-01-02 03:57:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:57:57.568263 | orchestrator | 2026-01-02 03:57:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:58:00.621759 | orchestrator | 2026-01-02 03:58:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:58:00.623653 | orchestrator | 2026-01-02 03:58:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:58:00.623693 | orchestrator | 2026-01-02 03:58:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:58:03.674213 | orchestrator | 2026-01-02 03:58:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:58:03.676497 | orchestrator | 2026-01-02 03:58:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:58:03.676600 | orchestrator | 2026-01-02 03:58:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:58:06.730795 | orchestrator | 2026-01-02 03:58:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:58:06.733433 | orchestrator | 2026-01-02 03:58:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:58:06.733495 | orchestrator | 2026-01-02 03:58:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:58:09.782786 | orchestrator | 2026-01-02 03:58:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:58:09.783106 | orchestrator | 2026-01-02 03:58:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
03:58:09.783133 | orchestrator | 2026-01-02 03:58:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:58:12.837031 | orchestrator | 2026-01-02 03:58:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:58:12.839896 | orchestrator | 2026-01-02 03:58:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:58:12.840025 | orchestrator | 2026-01-02 03:58:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:58:15.889992 | orchestrator | 2026-01-02 03:58:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:58:15.892050 | orchestrator | 2026-01-02 03:58:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:58:15.892158 | orchestrator | 2026-01-02 03:58:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:58:18.940125 | orchestrator | 2026-01-02 03:58:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:58:18.941972 | orchestrator | 2026-01-02 03:58:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:58:18.942138 | orchestrator | 2026-01-02 03:58:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:58:21.986784 | orchestrator | 2026-01-02 03:58:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:58:21.988551 | orchestrator | 2026-01-02 03:58:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:58:21.988700 | orchestrator | 2026-01-02 03:58:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:58:25.034300 | orchestrator | 2026-01-02 03:58:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 03:58:25.038371 | orchestrator | 2026-01-02 03:58:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 03:58:25.038522 | orchestrator | 2026-01-02 03:58:25 | INFO  | Wait 1 second(s) 
until the next check
2026-01-02 03:58:28.085429 | orchestrator | 2026-01-02 03:58:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 03:58:28.086511 | orchestrator | 2026-01-02 03:58:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 03:58:28.086614 | orchestrator | 2026-01-02 03:58:28 | INFO  | Wait 1 second(s) until the next check
[... identical polling records repeated every ~3 seconds from 03:58:31 through 04:03:39; both tasks remained in state STARTED, each cycle ending with "Wait 1 second(s) until the next check" ...]
2026-01-02 04:03:42.279775 | orchestrator | 2026-01-02 04:03:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 04:03:42.280825 | orchestrator | 2026-01-02 04:03:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 04:03:42.280914 | orchestrator | 2026-01-02 04:03:42 | INFO  | Wait 1 second(s)
until the next check 2026-01-02 04:03:45.330941 | orchestrator | 2026-01-02 04:03:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:03:45.331856 | orchestrator | 2026-01-02 04:03:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:03:45.332209 | orchestrator | 2026-01-02 04:03:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:48.379075 | orchestrator | 2026-01-02 04:03:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:03:48.381350 | orchestrator | 2026-01-02 04:03:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:03:48.381388 | orchestrator | 2026-01-02 04:03:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:51.432274 | orchestrator | 2026-01-02 04:03:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:03:51.434553 | orchestrator | 2026-01-02 04:03:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:03:51.434713 | orchestrator | 2026-01-02 04:03:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:54.484818 | orchestrator | 2026-01-02 04:03:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:03:54.487787 | orchestrator | 2026-01-02 04:03:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:03:54.487861 | orchestrator | 2026-01-02 04:03:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:57.540532 | orchestrator | 2026-01-02 04:03:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:03:57.543550 | orchestrator | 2026-01-02 04:03:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:03:57.543658 | orchestrator | 2026-01-02 04:03:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:00.577721 | orchestrator | 2026-01-02 
04:04:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:04:00.578308 | orchestrator | 2026-01-02 04:04:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:04:00.578334 | orchestrator | 2026-01-02 04:04:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:03.624325 | orchestrator | 2026-01-02 04:04:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:04:03.626421 | orchestrator | 2026-01-02 04:04:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:04:03.626606 | orchestrator | 2026-01-02 04:04:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:06.676231 | orchestrator | 2026-01-02 04:04:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:04:06.678789 | orchestrator | 2026-01-02 04:04:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:04:06.678840 | orchestrator | 2026-01-02 04:04:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:09.731664 | orchestrator | 2026-01-02 04:04:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:04:09.733228 | orchestrator | 2026-01-02 04:04:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:04:09.733497 | orchestrator | 2026-01-02 04:04:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:12.777928 | orchestrator | 2026-01-02 04:04:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:04:12.782545 | orchestrator | 2026-01-02 04:04:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:04:12.782693 | orchestrator | 2026-01-02 04:04:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:15.835263 | orchestrator | 2026-01-02 04:04:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 04:04:15.836167 | orchestrator | 2026-01-02 04:04:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:04:15.836398 | orchestrator | 2026-01-02 04:04:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:18.884921 | orchestrator | 2026-01-02 04:04:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:04:18.886391 | orchestrator | 2026-01-02 04:04:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:04:18.886525 | orchestrator | 2026-01-02 04:04:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:21.934514 | orchestrator | 2026-01-02 04:04:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:04:21.935780 | orchestrator | 2026-01-02 04:04:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:04:21.935833 | orchestrator | 2026-01-02 04:04:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:24.983666 | orchestrator | 2026-01-02 04:04:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:04:24.986175 | orchestrator | 2026-01-02 04:04:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:04:24.986358 | orchestrator | 2026-01-02 04:04:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:28.033503 | orchestrator | 2026-01-02 04:04:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:04:28.035261 | orchestrator | 2026-01-02 04:04:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:04:28.035296 | orchestrator | 2026-01-02 04:04:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:31.088473 | orchestrator | 2026-01-02 04:04:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:04:31.090170 | orchestrator | 2026-01-02 04:04:31 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:04:31.090220 | orchestrator | 2026-01-02 04:04:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:34.132277 | orchestrator | 2026-01-02 04:04:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:04:34.134714 | orchestrator | 2026-01-02 04:04:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:04:34.134784 | orchestrator | 2026-01-02 04:04:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:37.184186 | orchestrator | 2026-01-02 04:04:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:04:37.185941 | orchestrator | 2026-01-02 04:04:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:04:37.185980 | orchestrator | 2026-01-02 04:04:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:40.231982 | orchestrator | 2026-01-02 04:04:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:04:40.236150 | orchestrator | 2026-01-02 04:04:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:04:40.236365 | orchestrator | 2026-01-02 04:04:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:43.289085 | orchestrator | 2026-01-02 04:04:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:04:43.291611 | orchestrator | 2026-01-02 04:04:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:04:43.291671 | orchestrator | 2026-01-02 04:04:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:46.338331 | orchestrator | 2026-01-02 04:04:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:04:46.341115 | orchestrator | 2026-01-02 04:04:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
04:04:46.341847 | orchestrator | 2026-01-02 04:04:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:49.394346 | orchestrator | 2026-01-02 04:04:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:04:49.395466 | orchestrator | 2026-01-02 04:04:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:04:49.395516 | orchestrator | 2026-01-02 04:04:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:52.448689 | orchestrator | 2026-01-02 04:04:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:04:52.449891 | orchestrator | 2026-01-02 04:04:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:04:52.449935 | orchestrator | 2026-01-02 04:04:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:55.503043 | orchestrator | 2026-01-02 04:04:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:04:55.504693 | orchestrator | 2026-01-02 04:04:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:04:55.504736 | orchestrator | 2026-01-02 04:04:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:58.554376 | orchestrator | 2026-01-02 04:04:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:04:58.556241 | orchestrator | 2026-01-02 04:04:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:04:58.556848 | orchestrator | 2026-01-02 04:04:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:01.602265 | orchestrator | 2026-01-02 04:05:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:05:01.604138 | orchestrator | 2026-01-02 04:05:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:01.604236 | orchestrator | 2026-01-02 04:05:01 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 04:05:04.659628 | orchestrator | 2026-01-02 04:05:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:05:04.663339 | orchestrator | 2026-01-02 04:05:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:04.663695 | orchestrator | 2026-01-02 04:05:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:07.707890 | orchestrator | 2026-01-02 04:05:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:05:07.710516 | orchestrator | 2026-01-02 04:05:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:07.710596 | orchestrator | 2026-01-02 04:05:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:10.751940 | orchestrator | 2026-01-02 04:05:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:05:10.753642 | orchestrator | 2026-01-02 04:05:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:10.753910 | orchestrator | 2026-01-02 04:05:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:13.796085 | orchestrator | 2026-01-02 04:05:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:05:13.798432 | orchestrator | 2026-01-02 04:05:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:13.798530 | orchestrator | 2026-01-02 04:05:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:16.848198 | orchestrator | 2026-01-02 04:05:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:05:16.850271 | orchestrator | 2026-01-02 04:05:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:16.850348 | orchestrator | 2026-01-02 04:05:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:19.906592 | orchestrator | 2026-01-02 
04:05:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:05:19.908254 | orchestrator | 2026-01-02 04:05:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:19.908621 | orchestrator | 2026-01-02 04:05:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:22.965204 | orchestrator | 2026-01-02 04:05:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:05:22.967609 | orchestrator | 2026-01-02 04:05:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:22.967653 | orchestrator | 2026-01-02 04:05:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:26.025524 | orchestrator | 2026-01-02 04:05:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:05:26.027167 | orchestrator | 2026-01-02 04:05:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:26.027218 | orchestrator | 2026-01-02 04:05:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:29.079605 | orchestrator | 2026-01-02 04:05:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:05:29.082005 | orchestrator | 2026-01-02 04:05:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:29.082086 | orchestrator | 2026-01-02 04:05:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:32.136902 | orchestrator | 2026-01-02 04:05:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:05:32.137561 | orchestrator | 2026-01-02 04:05:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:32.137656 | orchestrator | 2026-01-02 04:05:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:35.192499 | orchestrator | 2026-01-02 04:05:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 04:05:35.195456 | orchestrator | 2026-01-02 04:05:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:35.195517 | orchestrator | 2026-01-02 04:05:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:38.241473 | orchestrator | 2026-01-02 04:05:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:05:38.243295 | orchestrator | 2026-01-02 04:05:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:38.243385 | orchestrator | 2026-01-02 04:05:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:41.298315 | orchestrator | 2026-01-02 04:05:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:05:41.299596 | orchestrator | 2026-01-02 04:05:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:41.299738 | orchestrator | 2026-01-02 04:05:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:44.352925 | orchestrator | 2026-01-02 04:05:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:05:44.353025 | orchestrator | 2026-01-02 04:05:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:44.353041 | orchestrator | 2026-01-02 04:05:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:47.397503 | orchestrator | 2026-01-02 04:05:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:05:47.399687 | orchestrator | 2026-01-02 04:05:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:47.399728 | orchestrator | 2026-01-02 04:05:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:50.447608 | orchestrator | 2026-01-02 04:05:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:05:50.450182 | orchestrator | 2026-01-02 04:05:50 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:50.450315 | orchestrator | 2026-01-02 04:05:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:53.496738 | orchestrator | 2026-01-02 04:05:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:05:53.498004 | orchestrator | 2026-01-02 04:05:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:53.498095 | orchestrator | 2026-01-02 04:05:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:56.544988 | orchestrator | 2026-01-02 04:05:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:05:56.546912 | orchestrator | 2026-01-02 04:05:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:56.546965 | orchestrator | 2026-01-02 04:05:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:59.595502 | orchestrator | 2026-01-02 04:05:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:05:59.596861 | orchestrator | 2026-01-02 04:05:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:05:59.596900 | orchestrator | 2026-01-02 04:05:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:06:02.647151 | orchestrator | 2026-01-02 04:06:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:06:02.648058 | orchestrator | 2026-01-02 04:06:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:06:02.648163 | orchestrator | 2026-01-02 04:06:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:06:05.696995 | orchestrator | 2026-01-02 04:06:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:06:05.700961 | orchestrator | 2026-01-02 04:06:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
04:06:05.701017 | orchestrator | 2026-01-02 04:06:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:06:08.748115 | orchestrator | 2026-01-02 04:06:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:06:08.748834 | orchestrator | 2026-01-02 04:06:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:06:08.748868 | orchestrator | 2026-01-02 04:06:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:06:11.804511 | orchestrator | 2026-01-02 04:06:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:06:11.805528 | orchestrator | 2026-01-02 04:06:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:06:11.805616 | orchestrator | 2026-01-02 04:06:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:06:14.853621 | orchestrator | 2026-01-02 04:06:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:06:14.855987 | orchestrator | 2026-01-02 04:06:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:06:14.856039 | orchestrator | 2026-01-02 04:06:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:06:17.900724 | orchestrator | 2026-01-02 04:06:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:06:17.901884 | orchestrator | 2026-01-02 04:06:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:06:17.902115 | orchestrator | 2026-01-02 04:06:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:06:20.938620 | orchestrator | 2026-01-02 04:06:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:06:20.940043 | orchestrator | 2026-01-02 04:06:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:06:20.940167 | orchestrator | 2026-01-02 04:06:20 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 04:06:23.990428 | orchestrator | 2026-01-02 04:06:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:06:23.995141 | orchestrator | 2026-01-02 04:06:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:06:23.995182 | orchestrator | 2026-01-02 04:06:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:06:27.038701 | orchestrator | 2026-01-02 04:06:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:06:27.039383 | orchestrator | 2026-01-02 04:06:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:06:27.039426 | orchestrator | 2026-01-02 04:06:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:06:30.090174 | orchestrator | 2026-01-02 04:06:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:06:30.093045 | orchestrator | 2026-01-02 04:06:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:06:30.093200 | orchestrator | 2026-01-02 04:06:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:06:33.142608 | orchestrator | 2026-01-02 04:06:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:06:33.144372 | orchestrator | 2026-01-02 04:06:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:06:33.144455 | orchestrator | 2026-01-02 04:06:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:06:36.189978 | orchestrator | 2026-01-02 04:06:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:06:36.191590 | orchestrator | 2026-01-02 04:06:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:06:36.191635 | orchestrator | 2026-01-02 04:06:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:06:39.238703 | orchestrator | 2026-01-02 
04:06:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:06:39.240676 | orchestrator | 2026-01-02 04:06:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:06:39.240767 | orchestrator | 2026-01-02 04:06:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:06:42.292783 | orchestrator | 2026-01-02 04:06:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:06:42.295053 | orchestrator | 2026-01-02 04:06:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:06:42.295103 | orchestrator | 2026-01-02 04:06:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:06:45.347775 | orchestrator | 2026-01-02 04:06:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:06:45.349235 | orchestrator | 2026-01-02 04:06:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:06:45.349344 | orchestrator | 2026-01-02 04:06:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:06:48.392571 | orchestrator | 2026-01-02 04:06:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:06:48.393827 | orchestrator | 2026-01-02 04:06:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:06:48.394086 | orchestrator | 2026-01-02 04:06:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:06:51.443024 | orchestrator | 2026-01-02 04:06:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:06:51.445122 | orchestrator | 2026-01-02 04:06:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:06:51.445180 | orchestrator | 2026-01-02 04:06:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:06:54.494548 | orchestrator | 2026-01-02 04:06:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 04:06:54.496205 | orchestrator | 2026-01-02 04:06:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:06:54.496335 | orchestrator | 2026-01-02 04:06:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:06:57.548276 | orchestrator | 2026-01-02 04:06:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:06:57.550185 | orchestrator | 2026-01-02 04:06:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:06:57.550236 | orchestrator | 2026-01-02 04:06:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:07:00.594749 | orchestrator | 2026-01-02 04:07:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:07:00.597535 | orchestrator | 2026-01-02 04:07:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:07:00.597637 | orchestrator | 2026-01-02 04:07:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:07:03.645455 | orchestrator | 2026-01-02 04:07:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:07:03.647502 | orchestrator | 2026-01-02 04:07:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:07:03.647616 | orchestrator | 2026-01-02 04:07:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:07:06.698608 | orchestrator | 2026-01-02 04:07:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:07:06.701181 | orchestrator | 2026-01-02 04:07:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:07:06.701252 | orchestrator | 2026-01-02 04:07:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:07:09.746477 | orchestrator | 2026-01-02 04:07:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:07:09.747689 | orchestrator | 2026-01-02 04:07:09 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:07:09.748222 | orchestrator | 2026-01-02 04:07:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:07:12.800688 | orchestrator | 2026-01-02 04:07:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:07:12.802433 | orchestrator | 2026-01-02 04:07:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:07:12.802490 | orchestrator | 2026-01-02 04:07:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:07:15.846489 | orchestrator | 2026-01-02 04:07:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:07:15.847856 | orchestrator | 2026-01-02 04:07:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:07:15.847983 | orchestrator | 2026-01-02 04:07:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:07:18.895907 | orchestrator | 2026-01-02 04:07:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:07:18.898467 | orchestrator | 2026-01-02 04:07:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:07:18.898538 | orchestrator | 2026-01-02 04:07:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:07:21.944754 | orchestrator | 2026-01-02 04:07:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:07:21.946643 | orchestrator | 2026-01-02 04:07:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:07:21.946693 | orchestrator | 2026-01-02 04:07:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:07:25.000005 | orchestrator | 2026-01-02 04:07:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:07:25.002168 | orchestrator | 2026-01-02 04:07:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
04:07:25.002213 | orchestrator | 2026-01-02 04:07:25 | INFO  | Wait 1 second(s) until the next check
2026-01-02 04:07:28.051856 | orchestrator | 2026-01-02 04:07:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 04:07:28.053894 | orchestrator | 2026-01-02 04:07:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 04:07:28.053940 | orchestrator | 2026-01-02 04:07:28 | INFO  | Wait 1 second(s) until the next check
[identical polling output repeated every ~3 seconds from 04:07:31 through 04:12:54; both tasks remained in state STARTED throughout]
2026-01-02 04:12:57.664222 | orchestrator | 2026-01-02 04:12:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 04:12:57.665695 | orchestrator | 2026-01-02 04:12:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 04:12:57.665741 | orchestrator | 2026-01-02 04:12:57 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 04:13:00.712018 | orchestrator | 2026-01-02 04:13:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:13:00.714699 | orchestrator | 2026-01-02 04:13:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:00.714754 | orchestrator | 2026-01-02 04:13:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:03.759880 | orchestrator | 2026-01-02 04:13:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:13:03.761887 | orchestrator | 2026-01-02 04:13:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:03.761935 | orchestrator | 2026-01-02 04:13:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:06.818522 | orchestrator | 2026-01-02 04:13:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:13:06.819956 | orchestrator | 2026-01-02 04:13:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:06.820016 | orchestrator | 2026-01-02 04:13:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:09.873273 | orchestrator | 2026-01-02 04:13:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:13:09.874918 | orchestrator | 2026-01-02 04:13:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:09.875131 | orchestrator | 2026-01-02 04:13:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:12.927299 | orchestrator | 2026-01-02 04:13:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:13:12.929324 | orchestrator | 2026-01-02 04:13:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:12.929501 | orchestrator | 2026-01-02 04:13:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:15.983001 | orchestrator | 2026-01-02 
04:13:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:13:15.985349 | orchestrator | 2026-01-02 04:13:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:15.985416 | orchestrator | 2026-01-02 04:13:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:19.044971 | orchestrator | 2026-01-02 04:13:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:13:19.046473 | orchestrator | 2026-01-02 04:13:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:19.046585 | orchestrator | 2026-01-02 04:13:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:22.095862 | orchestrator | 2026-01-02 04:13:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:13:22.095949 | orchestrator | 2026-01-02 04:13:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:22.095960 | orchestrator | 2026-01-02 04:13:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:25.141098 | orchestrator | 2026-01-02 04:13:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:13:25.143080 | orchestrator | 2026-01-02 04:13:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:25.143127 | orchestrator | 2026-01-02 04:13:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:28.190214 | orchestrator | 2026-01-02 04:13:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:13:28.190682 | orchestrator | 2026-01-02 04:13:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:28.190717 | orchestrator | 2026-01-02 04:13:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:31.245526 | orchestrator | 2026-01-02 04:13:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 04:13:31.246742 | orchestrator | 2026-01-02 04:13:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:31.246852 | orchestrator | 2026-01-02 04:13:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:34.294472 | orchestrator | 2026-01-02 04:13:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:13:34.296143 | orchestrator | 2026-01-02 04:13:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:34.296259 | orchestrator | 2026-01-02 04:13:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:37.333802 | orchestrator | 2026-01-02 04:13:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:13:37.335489 | orchestrator | 2026-01-02 04:13:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:37.335544 | orchestrator | 2026-01-02 04:13:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:40.382921 | orchestrator | 2026-01-02 04:13:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:13:40.384147 | orchestrator | 2026-01-02 04:13:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:40.384212 | orchestrator | 2026-01-02 04:13:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:43.425549 | orchestrator | 2026-01-02 04:13:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:13:43.427941 | orchestrator | 2026-01-02 04:13:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:43.427985 | orchestrator | 2026-01-02 04:13:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:46.472605 | orchestrator | 2026-01-02 04:13:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:13:46.474392 | orchestrator | 2026-01-02 04:13:46 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:46.474484 | orchestrator | 2026-01-02 04:13:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:49.519566 | orchestrator | 2026-01-02 04:13:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:13:49.521267 | orchestrator | 2026-01-02 04:13:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:49.521317 | orchestrator | 2026-01-02 04:13:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:52.576892 | orchestrator | 2026-01-02 04:13:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:13:52.581097 | orchestrator | 2026-01-02 04:13:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:52.581189 | orchestrator | 2026-01-02 04:13:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:55.630447 | orchestrator | 2026-01-02 04:13:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:13:55.633520 | orchestrator | 2026-01-02 04:13:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:55.633602 | orchestrator | 2026-01-02 04:13:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:58.679241 | orchestrator | 2026-01-02 04:13:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:13:58.680660 | orchestrator | 2026-01-02 04:13:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:13:58.681435 | orchestrator | 2026-01-02 04:13:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:01.724921 | orchestrator | 2026-01-02 04:14:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:14:01.726531 | orchestrator | 2026-01-02 04:14:01 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
04:14:01.726576 | orchestrator | 2026-01-02 04:14:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:04.783355 | orchestrator | 2026-01-02 04:14:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:14:04.785294 | orchestrator | 2026-01-02 04:14:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:14:04.785345 | orchestrator | 2026-01-02 04:14:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:07.835145 | orchestrator | 2026-01-02 04:14:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:14:07.837191 | orchestrator | 2026-01-02 04:14:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:14:07.837307 | orchestrator | 2026-01-02 04:14:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:10.883712 | orchestrator | 2026-01-02 04:14:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:14:10.885466 | orchestrator | 2026-01-02 04:14:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:14:10.885539 | orchestrator | 2026-01-02 04:14:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:13.928418 | orchestrator | 2026-01-02 04:14:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:14:13.931368 | orchestrator | 2026-01-02 04:14:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:14:13.931439 | orchestrator | 2026-01-02 04:14:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:16.974332 | orchestrator | 2026-01-02 04:14:16 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:14:16.975168 | orchestrator | 2026-01-02 04:14:16 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:14:16.975233 | orchestrator | 2026-01-02 04:14:16 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 04:14:20.026223 | orchestrator | 2026-01-02 04:14:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:14:20.028287 | orchestrator | 2026-01-02 04:14:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:14:20.028969 | orchestrator | 2026-01-02 04:14:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:23.081998 | orchestrator | 2026-01-02 04:14:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:14:23.083398 | orchestrator | 2026-01-02 04:14:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:14:23.083422 | orchestrator | 2026-01-02 04:14:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:26.133412 | orchestrator | 2026-01-02 04:14:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:14:26.135171 | orchestrator | 2026-01-02 04:14:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:14:26.135207 | orchestrator | 2026-01-02 04:14:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:29.186733 | orchestrator | 2026-01-02 04:14:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:14:29.188970 | orchestrator | 2026-01-02 04:14:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:14:29.188995 | orchestrator | 2026-01-02 04:14:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:32.239781 | orchestrator | 2026-01-02 04:14:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:14:32.242341 | orchestrator | 2026-01-02 04:14:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:14:32.242386 | orchestrator | 2026-01-02 04:14:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:35.287862 | orchestrator | 2026-01-02 
04:14:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:14:35.289877 | orchestrator | 2026-01-02 04:14:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:14:35.289928 | orchestrator | 2026-01-02 04:14:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:38.343747 | orchestrator | 2026-01-02 04:14:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:14:38.344872 | orchestrator | 2026-01-02 04:14:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:14:38.344910 | orchestrator | 2026-01-02 04:14:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:41.393187 | orchestrator | 2026-01-02 04:14:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:14:41.395904 | orchestrator | 2026-01-02 04:14:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:14:41.395947 | orchestrator | 2026-01-02 04:14:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:44.443406 | orchestrator | 2026-01-02 04:14:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:14:44.445233 | orchestrator | 2026-01-02 04:14:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:14:44.445316 | orchestrator | 2026-01-02 04:14:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:47.493254 | orchestrator | 2026-01-02 04:14:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:14:47.495021 | orchestrator | 2026-01-02 04:14:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:14:47.495042 | orchestrator | 2026-01-02 04:14:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:50.539546 | orchestrator | 2026-01-02 04:14:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 04:14:50.540799 | orchestrator | 2026-01-02 04:14:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:14:50.540859 | orchestrator | 2026-01-02 04:14:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:53.594643 | orchestrator | 2026-01-02 04:14:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:14:53.596168 | orchestrator | 2026-01-02 04:14:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:14:53.596204 | orchestrator | 2026-01-02 04:14:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:56.647048 | orchestrator | 2026-01-02 04:14:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:14:56.650192 | orchestrator | 2026-01-02 04:14:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:14:56.650253 | orchestrator | 2026-01-02 04:14:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:59.701832 | orchestrator | 2026-01-02 04:14:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:14:59.704214 | orchestrator | 2026-01-02 04:14:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:14:59.704364 | orchestrator | 2026-01-02 04:14:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:15:02.757582 | orchestrator | 2026-01-02 04:15:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:15:02.761287 | orchestrator | 2026-01-02 04:15:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:15:02.761531 | orchestrator | 2026-01-02 04:15:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:15:05.810160 | orchestrator | 2026-01-02 04:15:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:15:05.811959 | orchestrator | 2026-01-02 04:15:05 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:15:05.812083 | orchestrator | 2026-01-02 04:15:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:15:08.858834 | orchestrator | 2026-01-02 04:15:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:15:08.859939 | orchestrator | 2026-01-02 04:15:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:15:08.860120 | orchestrator | 2026-01-02 04:15:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:15:11.902206 | orchestrator | 2026-01-02 04:15:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:15:11.905022 | orchestrator | 2026-01-02 04:15:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:15:11.905089 | orchestrator | 2026-01-02 04:15:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:15:14.955091 | orchestrator | 2026-01-02 04:15:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:15:14.956961 | orchestrator | 2026-01-02 04:15:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:15:14.957035 | orchestrator | 2026-01-02 04:15:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:15:18.002106 | orchestrator | 2026-01-02 04:15:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:15:18.005191 | orchestrator | 2026-01-02 04:15:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:15:18.005261 | orchestrator | 2026-01-02 04:15:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:15:21.054853 | orchestrator | 2026-01-02 04:15:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:15:21.057531 | orchestrator | 2026-01-02 04:15:21 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
04:15:21.058599 | orchestrator | 2026-01-02 04:15:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:15:24.106785 | orchestrator | 2026-01-02 04:15:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:15:24.108522 | orchestrator | 2026-01-02 04:15:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:15:24.108601 | orchestrator | 2026-01-02 04:15:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:15:27.154002 | orchestrator | 2026-01-02 04:15:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:15:27.156256 | orchestrator | 2026-01-02 04:15:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:15:27.156293 | orchestrator | 2026-01-02 04:15:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:15:30.202834 | orchestrator | 2026-01-02 04:15:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:15:30.204239 | orchestrator | 2026-01-02 04:15:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:15:30.204484 | orchestrator | 2026-01-02 04:15:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:15:33.251339 | orchestrator | 2026-01-02 04:15:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:15:33.253459 | orchestrator | 2026-01-02 04:15:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:15:33.253508 | orchestrator | 2026-01-02 04:15:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:15:36.299955 | orchestrator | 2026-01-02 04:15:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:15:36.302117 | orchestrator | 2026-01-02 04:15:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:15:36.302188 | orchestrator | 2026-01-02 04:15:36 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 04:15:39.356479 | orchestrator | 2026-01-02 04:15:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:15:39.358566 | orchestrator | 2026-01-02 04:15:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:15:39.358627 | orchestrator | 2026-01-02 04:15:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:15:42.401939 | orchestrator | 2026-01-02 04:15:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:15:42.405311 | orchestrator | 2026-01-02 04:15:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:15:42.405372 | orchestrator | 2026-01-02 04:15:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:15:45.455824 | orchestrator | 2026-01-02 04:15:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:15:45.456109 | orchestrator | 2026-01-02 04:15:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:15:45.456146 | orchestrator | 2026-01-02 04:15:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:15:48.502174 | orchestrator | 2026-01-02 04:15:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:15:48.503931 | orchestrator | 2026-01-02 04:15:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:15:48.503954 | orchestrator | 2026-01-02 04:15:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:15:51.548709 | orchestrator | 2026-01-02 04:15:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:15:51.549149 | orchestrator | 2026-01-02 04:15:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:15:51.549300 | orchestrator | 2026-01-02 04:15:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:15:54.592095 | orchestrator | 2026-01-02 
04:15:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:15:54.594698 | orchestrator | 2026-01-02 04:15:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:15:54.594780 | orchestrator | 2026-01-02 04:15:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:15:57.643416 | orchestrator | 2026-01-02 04:15:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:15:57.645890 | orchestrator | 2026-01-02 04:15:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:15:57.646007 | orchestrator | 2026-01-02 04:15:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:16:00.689114 | orchestrator | 2026-01-02 04:16:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:16:00.691492 | orchestrator | 2026-01-02 04:16:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:16:00.691627 | orchestrator | 2026-01-02 04:16:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:16:03.736205 | orchestrator | 2026-01-02 04:16:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:16:03.737654 | orchestrator | 2026-01-02 04:16:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:16:03.737720 | orchestrator | 2026-01-02 04:16:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:16:06.786210 | orchestrator | 2026-01-02 04:16:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:16:06.788059 | orchestrator | 2026-01-02 04:16:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:16:06.788127 | orchestrator | 2026-01-02 04:16:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:16:09.837149 | orchestrator | 2026-01-02 04:16:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 04:16:09.839508 | orchestrator | 2026-01-02 04:16:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:16:09.839562 | orchestrator | 2026-01-02 04:16:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:16:12.895459 | orchestrator | 2026-01-02 04:16:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:16:12.897107 | orchestrator | 2026-01-02 04:16:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:16:12.897209 | orchestrator | 2026-01-02 04:16:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:16:15.941676 | orchestrator | 2026-01-02 04:16:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:16:15.942854 | orchestrator | 2026-01-02 04:16:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:16:15.942887 | orchestrator | 2026-01-02 04:16:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:16:18.996435 | orchestrator | 2026-01-02 04:16:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:16:18.998887 | orchestrator | 2026-01-02 04:16:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:16:18.999134 | orchestrator | 2026-01-02 04:16:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:16:22.044517 | orchestrator | 2026-01-02 04:16:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:16:22.044809 | orchestrator | 2026-01-02 04:16:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:16:22.044849 | orchestrator | 2026-01-02 04:16:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:16:25.093860 | orchestrator | 2026-01-02 04:16:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:16:25.096511 | orchestrator | 2026-01-02 04:16:25 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:16:25.096563 | orchestrator | 2026-01-02 04:16:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:16:28.148892 | orchestrator | 2026-01-02 04:16:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:16:28.150166 | orchestrator | 2026-01-02 04:16:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:16:28.150819 | orchestrator | 2026-01-02 04:16:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:16:31.206227 | orchestrator | 2026-01-02 04:16:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:16:31.207457 | orchestrator | 2026-01-02 04:16:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:16:31.207501 | orchestrator | 2026-01-02 04:16:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:16:34.256297 | orchestrator | 2026-01-02 04:16:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:16:34.257675 | orchestrator | 2026-01-02 04:16:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:16:34.258079 | orchestrator | 2026-01-02 04:16:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:16:37.306768 | orchestrator | 2026-01-02 04:16:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:16:37.308102 | orchestrator | 2026-01-02 04:16:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:16:37.308244 | orchestrator | 2026-01-02 04:16:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:16:40.354358 | orchestrator | 2026-01-02 04:16:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:16:40.355673 | orchestrator | 2026-01-02 04:16:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
04:16:40.355737 | orchestrator | 2026-01-02 04:16:40 | INFO  | Wait 1 second(s) until the next check
2026-01-02 04:16:43.402441 | orchestrator | 2026-01-02 04:16:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 04:16:43.404238 | orchestrator | 2026-01-02 04:16:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 04:16:43.404285 | orchestrator | 2026-01-02 04:16:43 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 04:16:46 through 04:21:39; both tasks remain in state STARTED throughout ...]
2026-01-02 04:21:42.337325 | orchestrator | 2026-01-02 04:21:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 04:21:42.338857 | orchestrator | 2026-01-02 04:21:42 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:21:42.338975 | orchestrator | 2026-01-02 04:21:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:45.399144 | orchestrator | 2026-01-02 04:21:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:21:45.402478 | orchestrator | 2026-01-02 04:21:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:21:45.402539 | orchestrator | 2026-01-02 04:21:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:48.448898 | orchestrator | 2026-01-02 04:21:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:21:48.451336 | orchestrator | 2026-01-02 04:21:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:21:48.451394 | orchestrator | 2026-01-02 04:21:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:51.499775 | orchestrator | 2026-01-02 04:21:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:21:51.501837 | orchestrator | 2026-01-02 04:21:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:21:51.501875 | orchestrator | 2026-01-02 04:21:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:54.557615 | orchestrator | 2026-01-02 04:21:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:21:54.564546 | orchestrator | 2026-01-02 04:21:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:21:54.564606 | orchestrator | 2026-01-02 04:21:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:57.609672 | orchestrator | 2026-01-02 04:21:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:21:57.611762 | orchestrator | 2026-01-02 04:21:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
04:21:57.611876 | orchestrator | 2026-01-02 04:21:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:00.658673 | orchestrator | 2026-01-02 04:22:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:22:00.660384 | orchestrator | 2026-01-02 04:22:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:00.660435 | orchestrator | 2026-01-02 04:22:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:03.715524 | orchestrator | 2026-01-02 04:22:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:22:03.717524 | orchestrator | 2026-01-02 04:22:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:03.717581 | orchestrator | 2026-01-02 04:22:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:06.764410 | orchestrator | 2026-01-02 04:22:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:22:06.765400 | orchestrator | 2026-01-02 04:22:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:06.765453 | orchestrator | 2026-01-02 04:22:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:09.822997 | orchestrator | 2026-01-02 04:22:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:22:09.824630 | orchestrator | 2026-01-02 04:22:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:09.824706 | orchestrator | 2026-01-02 04:22:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:12.872480 | orchestrator | 2026-01-02 04:22:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:22:12.875020 | orchestrator | 2026-01-02 04:22:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:12.875103 | orchestrator | 2026-01-02 04:22:12 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 04:22:15.922592 | orchestrator | 2026-01-02 04:22:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:22:15.923697 | orchestrator | 2026-01-02 04:22:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:15.923727 | orchestrator | 2026-01-02 04:22:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:18.975328 | orchestrator | 2026-01-02 04:22:18 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:22:18.978153 | orchestrator | 2026-01-02 04:22:18 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:18.978288 | orchestrator | 2026-01-02 04:22:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:22.025865 | orchestrator | 2026-01-02 04:22:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:22:22.031410 | orchestrator | 2026-01-02 04:22:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:22.031476 | orchestrator | 2026-01-02 04:22:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:25.095820 | orchestrator | 2026-01-02 04:22:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:22:25.098170 | orchestrator | 2026-01-02 04:22:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:25.098235 | orchestrator | 2026-01-02 04:22:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:28.151675 | orchestrator | 2026-01-02 04:22:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:22:28.153951 | orchestrator | 2026-01-02 04:22:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:28.154005 | orchestrator | 2026-01-02 04:22:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:31.212019 | orchestrator | 2026-01-02 
04:22:31 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:22:31.215250 | orchestrator | 2026-01-02 04:22:31 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:31.215299 | orchestrator | 2026-01-02 04:22:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:34.272009 | orchestrator | 2026-01-02 04:22:34 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:22:34.274684 | orchestrator | 2026-01-02 04:22:34 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:34.274738 | orchestrator | 2026-01-02 04:22:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:37.322980 | orchestrator | 2026-01-02 04:22:37 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:22:37.324675 | orchestrator | 2026-01-02 04:22:37 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:37.324710 | orchestrator | 2026-01-02 04:22:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:40.374589 | orchestrator | 2026-01-02 04:22:40 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:22:40.375304 | orchestrator | 2026-01-02 04:22:40 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:40.375557 | orchestrator | 2026-01-02 04:22:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:43.423219 | orchestrator | 2026-01-02 04:22:43 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:22:43.424471 | orchestrator | 2026-01-02 04:22:43 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:43.424638 | orchestrator | 2026-01-02 04:22:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:46.477032 | orchestrator | 2026-01-02 04:22:46 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 04:22:46.478115 | orchestrator | 2026-01-02 04:22:46 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:46.478149 | orchestrator | 2026-01-02 04:22:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:49.529554 | orchestrator | 2026-01-02 04:22:49 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:22:49.534233 | orchestrator | 2026-01-02 04:22:49 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:49.534289 | orchestrator | 2026-01-02 04:22:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:52.581321 | orchestrator | 2026-01-02 04:22:52 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:22:52.583140 | orchestrator | 2026-01-02 04:22:52 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:52.583210 | orchestrator | 2026-01-02 04:22:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:55.632463 | orchestrator | 2026-01-02 04:22:55 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:22:55.634434 | orchestrator | 2026-01-02 04:22:55 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:55.634477 | orchestrator | 2026-01-02 04:22:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:58.689017 | orchestrator | 2026-01-02 04:22:58 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:22:58.693097 | orchestrator | 2026-01-02 04:22:58 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:22:58.693152 | orchestrator | 2026-01-02 04:22:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:01.746753 | orchestrator | 2026-01-02 04:23:01 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:01.750898 | orchestrator | 2026-01-02 04:23:01 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:23:01.750943 | orchestrator | 2026-01-02 04:23:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:04.800568 | orchestrator | 2026-01-02 04:23:04 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:04.802497 | orchestrator | 2026-01-02 04:23:04 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:23:04.802534 | orchestrator | 2026-01-02 04:23:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:07.849095 | orchestrator | 2026-01-02 04:23:07 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:07.852238 | orchestrator | 2026-01-02 04:23:07 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:23:07.852300 | orchestrator | 2026-01-02 04:23:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:10.903443 | orchestrator | 2026-01-02 04:23:10 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:10.905045 | orchestrator | 2026-01-02 04:23:10 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:23:10.905117 | orchestrator | 2026-01-02 04:23:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:13.954602 | orchestrator | 2026-01-02 04:23:13 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:13.956179 | orchestrator | 2026-01-02 04:23:13 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:23:13.956222 | orchestrator | 2026-01-02 04:23:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:17.007693 | orchestrator | 2026-01-02 04:23:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:17.009571 | orchestrator | 2026-01-02 04:23:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
04:23:17.009609 | orchestrator | 2026-01-02 04:23:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:20.055127 | orchestrator | 2026-01-02 04:23:20 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:20.056667 | orchestrator | 2026-01-02 04:23:20 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:23:20.056722 | orchestrator | 2026-01-02 04:23:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:23.099132 | orchestrator | 2026-01-02 04:23:23 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:23.099228 | orchestrator | 2026-01-02 04:23:23 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:23:23.099235 | orchestrator | 2026-01-02 04:23:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:26.150311 | orchestrator | 2026-01-02 04:23:26 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:26.152285 | orchestrator | 2026-01-02 04:23:26 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:23:26.152335 | orchestrator | 2026-01-02 04:23:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:29.203895 | orchestrator | 2026-01-02 04:23:29 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:29.205135 | orchestrator | 2026-01-02 04:23:29 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:23:29.205162 | orchestrator | 2026-01-02 04:23:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:32.250829 | orchestrator | 2026-01-02 04:23:32 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:32.252324 | orchestrator | 2026-01-02 04:23:32 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:23:32.252369 | orchestrator | 2026-01-02 04:23:32 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 04:23:35.300879 | orchestrator | 2026-01-02 04:23:35 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:35.302267 | orchestrator | 2026-01-02 04:23:35 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:23:35.302311 | orchestrator | 2026-01-02 04:23:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:38.346544 | orchestrator | 2026-01-02 04:23:38 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:38.348122 | orchestrator | 2026-01-02 04:23:38 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:23:38.348166 | orchestrator | 2026-01-02 04:23:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:41.402237 | orchestrator | 2026-01-02 04:23:41 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:41.404834 | orchestrator | 2026-01-02 04:23:41 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:23:41.404905 | orchestrator | 2026-01-02 04:23:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:44.451353 | orchestrator | 2026-01-02 04:23:44 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:44.453702 | orchestrator | 2026-01-02 04:23:44 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:23:44.453839 | orchestrator | 2026-01-02 04:23:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:47.502457 | orchestrator | 2026-01-02 04:23:47 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:47.504899 | orchestrator | 2026-01-02 04:23:47 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:23:47.504934 | orchestrator | 2026-01-02 04:23:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:50.546181 | orchestrator | 2026-01-02 
04:23:50 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:50.546849 | orchestrator | 2026-01-02 04:23:50 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:23:50.546885 | orchestrator | 2026-01-02 04:23:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:53.590802 | orchestrator | 2026-01-02 04:23:53 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:53.592133 | orchestrator | 2026-01-02 04:23:53 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:23:53.592159 | orchestrator | 2026-01-02 04:23:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:56.632414 | orchestrator | 2026-01-02 04:23:56 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:56.633187 | orchestrator | 2026-01-02 04:23:56 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:23:56.633211 | orchestrator | 2026-01-02 04:23:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:59.680448 | orchestrator | 2026-01-02 04:23:59 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:23:59.681419 | orchestrator | 2026-01-02 04:23:59 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:23:59.681466 | orchestrator | 2026-01-02 04:23:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:24:02.730960 | orchestrator | 2026-01-02 04:24:02 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:24:02.732477 | orchestrator | 2026-01-02 04:24:02 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:24:02.732507 | orchestrator | 2026-01-02 04:24:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:24:05.781666 | orchestrator | 2026-01-02 04:24:05 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED 2026-01-02 04:24:05.783111 | orchestrator | 2026-01-02 04:24:05 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:24:05.783252 | orchestrator | 2026-01-02 04:24:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:24:08.832377 | orchestrator | 2026-01-02 04:24:08 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:24:08.833524 | orchestrator | 2026-01-02 04:24:08 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:24:08.833633 | orchestrator | 2026-01-02 04:24:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:24:11.878247 | orchestrator | 2026-01-02 04:24:11 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:24:11.879894 | orchestrator | 2026-01-02 04:24:11 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:24:11.879957 | orchestrator | 2026-01-02 04:24:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:24:14.931804 | orchestrator | 2026-01-02 04:24:14 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:24:14.935796 | orchestrator | 2026-01-02 04:24:14 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:24:14.935863 | orchestrator | 2026-01-02 04:24:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:24:17.983121 | orchestrator | 2026-01-02 04:24:17 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:24:17.985343 | orchestrator | 2026-01-02 04:24:17 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:24:17.985411 | orchestrator | 2026-01-02 04:24:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:24:21.035523 | orchestrator | 2026-01-02 04:24:21 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:24:21.037504 | orchestrator | 2026-01-02 04:24:21 | INFO  
| Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:24:21.037596 | orchestrator | 2026-01-02 04:24:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:24:24.076236 | orchestrator | 2026-01-02 04:24:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:24:24.077539 | orchestrator | 2026-01-02 04:24:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:24:24.077610 | orchestrator | 2026-01-02 04:24:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:24:27.127650 | orchestrator | 2026-01-02 04:24:27 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:24:27.129536 | orchestrator | 2026-01-02 04:24:27 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:24:27.129569 | orchestrator | 2026-01-02 04:24:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:24:30.178095 | orchestrator | 2026-01-02 04:24:30 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:24:30.179780 | orchestrator | 2026-01-02 04:24:30 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:24:30.179817 | orchestrator | 2026-01-02 04:24:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:24:33.229772 | orchestrator | 2026-01-02 04:24:33 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:24:33.231854 | orchestrator | 2026-01-02 04:24:33 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:24:33.231916 | orchestrator | 2026-01-02 04:24:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:24:36.279923 | orchestrator | 2026-01-02 04:24:36 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:24:36.281878 | orchestrator | 2026-01-02 04:24:36 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 
04:24:36.281919 | orchestrator | 2026-01-02 04:24:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:24:39.335488 | orchestrator | 2026-01-02 04:24:39 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:24:39.336616 | orchestrator | 2026-01-02 04:24:39 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:24:39.336660 | orchestrator | 2026-01-02 04:24:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:24:42.390097 | orchestrator | 2026-01-02 04:24:42 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:24:42.392343 | orchestrator | 2026-01-02 04:24:42 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:24:42.392435 | orchestrator | 2026-01-02 04:24:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:24:45.445211 | orchestrator | 2026-01-02 04:24:45 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:24:45.445324 | orchestrator | 2026-01-02 04:24:45 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:24:45.445340 | orchestrator | 2026-01-02 04:24:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:24:48.495446 | orchestrator | 2026-01-02 04:24:48 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:24:48.497377 | orchestrator | 2026-01-02 04:24:48 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:24:48.497434 | orchestrator | 2026-01-02 04:24:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:24:51.541181 | orchestrator | 2026-01-02 04:24:51 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:24:51.544609 | orchestrator | 2026-01-02 04:24:51 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:24:51.544655 | orchestrator | 2026-01-02 04:24:51 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 04:24:54.588276 | orchestrator | 2026-01-02 04:24:54 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:24:54.590100 | orchestrator | 2026-01-02 04:24:54 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:24:54.590129 | orchestrator | 2026-01-02 04:24:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:24:57.639486 | orchestrator | 2026-01-02 04:24:57 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:24:57.641458 | orchestrator | 2026-01-02 04:24:57 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:24:57.641616 | orchestrator | 2026-01-02 04:24:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:25:00.688487 | orchestrator | 2026-01-02 04:25:00 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:25:00.691550 | orchestrator | 2026-01-02 04:25:00 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:25:00.691706 | orchestrator | 2026-01-02 04:25:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:25:03.750744 | orchestrator | 2026-01-02 04:25:03 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:25:03.754222 | orchestrator | 2026-01-02 04:25:03 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:25:03.754246 | orchestrator | 2026-01-02 04:25:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:25:06.805917 | orchestrator | 2026-01-02 04:25:06 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:25:06.808994 | orchestrator | 2026-01-02 04:25:06 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:25:06.809231 | orchestrator | 2026-01-02 04:25:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:25:09.851615 | orchestrator | 2026-01-02 
04:25:09 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:25:09.853987 | orchestrator | 2026-01-02 04:25:09 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:25:09.854417 | orchestrator | 2026-01-02 04:25:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:25:12.908364 | orchestrator | 2026-01-02 04:25:12 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:25:12.913242 | orchestrator | 2026-01-02 04:25:12 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:25:12.913311 | orchestrator | 2026-01-02 04:25:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:25:15.965653 | orchestrator | 2026-01-02 04:25:15 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:25:15.968746 | orchestrator | 2026-01-02 04:25:15 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:25:15.968808 | orchestrator | 2026-01-02 04:25:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:25:19.005932 | orchestrator | 2026-01-02 04:25:19 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:25:19.008672 | orchestrator | 2026-01-02 04:25:19 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:25:19.008842 | orchestrator | 2026-01-02 04:25:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:25:22.067898 | orchestrator | 2026-01-02 04:25:22 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED 2026-01-02 04:25:22.068276 | orchestrator | 2026-01-02 04:25:22 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED 2026-01-02 04:25:22.068512 | orchestrator | 2026-01-02 04:25:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:25:25.121045 | orchestrator | 2026-01-02 04:25:25 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state 
STARTED
2026-01-02 04:25:25.121827 | orchestrator | 2026-01-02 04:25:25 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 04:25:25.121927 | orchestrator | 2026-01-02 04:25:25 | INFO  | Wait 1 second(s) until the next check
2026-01-02 04:25:28.175526 | orchestrator | 2026-01-02 04:25:28 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 04:25:28.176057 | orchestrator | 2026-01-02 04:25:28 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 04:25:28.176088 | orchestrator | 2026-01-02 04:25:28 | INFO  | Wait 1 second(s) until the next check
[... identical status checks for tasks e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c and 922cb08d-5634-4147-8b36-6e252cfb52ba repeated every ~3 seconds until 04:30:24; both tasks remained in state STARTED ...]
2026-01-02 04:30:24.080259 | orchestrator | 2026-01-02 04:30:24 | INFO  | Task e4e777f8-b3b9-433d-ba3d-a87ea9d99a6c is in state STARTED
2026-01-02 04:30:24.080986 | orchestrator | 2026-01-02 04:30:24 | INFO  | Task 922cb08d-5634-4147-8b36-6e252cfb52ba is in state STARTED
2026-01-02 04:30:24.081025 | orchestrator | 2026-01-02 04:30:24 | INFO  | Wait 1 second(s) until the next check
2026-01-02 04:30:26.225015 | RUN END RESULT_TIMED_OUT:
[untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2026-01-02 04:30:26.228121 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-01-02 04:30:27.089255 | 2026-01-02 04:30:27.089430 | PLAY [Post output play] 2026-01-02 04:30:27.107858 | 2026-01-02 04:30:27.108050 | LOOP [stage-output : Register sources] 2026-01-02 04:30:27.167651 | 2026-01-02 04:30:27.167981 | TASK [stage-output : Check sudo] 2026-01-02 04:30:28.060380 | orchestrator | sudo: a password is required 2026-01-02 04:30:28.209825 | orchestrator | ok: Runtime: 0:00:00.019111 2026-01-02 04:30:28.225171 | 2026-01-02 04:30:28.225348 | LOOP [stage-output : Set source and destination for files and folders] 2026-01-02 04:30:28.266991 | 2026-01-02 04:30:28.267363 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-01-02 04:30:28.337560 | orchestrator | ok 2026-01-02 04:30:28.346084 | 2026-01-02 04:30:28.346230 | LOOP [stage-output : Ensure target folders exist] 2026-01-02 04:30:28.871263 | orchestrator | ok: "docs" 2026-01-02 04:30:28.871550 | 2026-01-02 04:30:29.123144 | orchestrator | ok: "artifacts" 2026-01-02 04:30:29.380030 | orchestrator | ok: "logs" 2026-01-02 04:30:29.395993 | 2026-01-02 04:30:29.396162 | LOOP [stage-output : Copy files and folders to staging folder] 2026-01-02 04:30:29.444065 | 2026-01-02 04:30:29.444340 | TASK [stage-output : Make all log files readable] 2026-01-02 04:30:29.738033 | orchestrator | ok 2026-01-02 04:30:29.747262 | 2026-01-02 04:30:29.747423 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-01-02 04:30:29.782983 | orchestrator | skipping: Conditional result was False 2026-01-02 04:30:29.797225 | 2026-01-02 04:30:29.797402 | TASK [stage-output : Discover log files for compression] 2026-01-02 04:30:29.822351 | orchestrator | skipping: Conditional result was False 2026-01-02 04:30:29.831901 | 2026-01-02 04:30:29.832083 | LOOP [stage-output : Archive everything from logs] 2026-01-02 
04:30:29.874600 | 2026-01-02 04:30:29.874772 | PLAY [Post cleanup play] 2026-01-02 04:30:29.883647 | 2026-01-02 04:30:29.883780 | TASK [Set cloud fact (Zuul deployment)] 2026-01-02 04:30:29.934052 | orchestrator | ok 2026-01-02 04:30:29.946332 | 2026-01-02 04:30:29.946479 | TASK [Set cloud fact (local deployment)] 2026-01-02 04:30:29.991189 | orchestrator | skipping: Conditional result was False 2026-01-02 04:30:30.004660 | 2026-01-02 04:30:30.005028 | TASK [Clean the cloud environment] 2026-01-02 04:30:31.478405 | orchestrator | 2026-01-02 04:30:31 - clean up servers 2026-01-02 04:30:32.364761 | orchestrator | 2026-01-02 04:30:32 - testbed-manager 2026-01-02 04:30:32.449933 | orchestrator | 2026-01-02 04:30:32 - testbed-node-1 2026-01-02 04:30:32.541160 | orchestrator | 2026-01-02 04:30:32 - testbed-node-0 2026-01-02 04:30:32.633567 | orchestrator | 2026-01-02 04:30:32 - testbed-node-3 2026-01-02 04:30:32.725485 | orchestrator | 2026-01-02 04:30:32 - testbed-node-4 2026-01-02 04:30:32.809796 | orchestrator | 2026-01-02 04:30:32 - testbed-node-5 2026-01-02 04:30:32.904689 | orchestrator | 2026-01-02 04:30:32 - testbed-node-2 2026-01-02 04:30:33.003955 | orchestrator | 2026-01-02 04:30:33 - clean up keypairs 2026-01-02 04:30:33.028363 | orchestrator | 2026-01-02 04:30:33 - testbed 2026-01-02 04:30:33.053660 | orchestrator | 2026-01-02 04:30:33 - wait for servers to be gone 2026-01-02 04:30:46.050542 | orchestrator | 2026-01-02 04:30:46 - clean up ports 2026-01-02 04:30:46.256313 | orchestrator | 2026-01-02 04:30:46 - 03db1444-050c-4a47-af1d-6993e17ed987 2026-01-02 04:30:47.068551 | orchestrator | 2026-01-02 04:30:47 - 13c2190b-6651-4b22-a92e-34036629c399 2026-01-02 04:30:48.048876 | orchestrator | 2026-01-02 04:30:48 - 400e3786-3b3e-4b47-a5d2-26e29a732bde 2026-01-02 04:30:48.560125 | orchestrator | 2026-01-02 04:30:48 - 6437d547-ef65-419a-ae1e-8a8967b900f7 2026-01-02 04:30:48.808217 | orchestrator | 2026-01-02 04:30:48 - 9124dc2c-9701-420f-9ca2-58cad130a622 
2026-01-02 04:30:49.072062 | orchestrator | 2026-01-02 04:30:49 - e9403b32-5af9-421d-84cc-11361b08ab96
2026-01-02 04:30:49.446225 | orchestrator | 2026-01-02 04:30:49 - f49ab2b7-2c97-43ce-8f79-bb2de07f6e2f
2026-01-02 04:30:49.695056 | orchestrator | 2026-01-02 04:30:49 - clean up volumes
2026-01-02 04:30:49.823348 | orchestrator | 2026-01-02 04:30:49 - testbed-volume-4-node-base
2026-01-02 04:30:49.868973 | orchestrator | 2026-01-02 04:30:49 - testbed-volume-0-node-base
2026-01-02 04:30:49.920364 | orchestrator | 2026-01-02 04:30:49 - testbed-volume-1-node-base
2026-01-02 04:30:49.967028 | orchestrator | 2026-01-02 04:30:49 - testbed-volume-3-node-base
2026-01-02 04:30:50.015130 | orchestrator | 2026-01-02 04:30:50 - testbed-volume-manager-base
2026-01-02 04:30:50.058275 | orchestrator | 2026-01-02 04:30:50 - testbed-volume-2-node-base
2026-01-02 04:30:50.113857 | orchestrator | 2026-01-02 04:30:50 - testbed-volume-5-node-base
2026-01-02 04:30:50.161933 | orchestrator | 2026-01-02 04:30:50 - testbed-volume-0-node-3
2026-01-02 04:30:50.205851 | orchestrator | 2026-01-02 04:30:50 - testbed-volume-6-node-3
2026-01-02 04:30:50.266653 | orchestrator | 2026-01-02 04:30:50 - testbed-volume-1-node-4
2026-01-02 04:30:50.317301 | orchestrator | 2026-01-02 04:30:50 - testbed-volume-7-node-4
2026-01-02 04:30:50.361939 | orchestrator | 2026-01-02 04:30:50 - testbed-volume-3-node-3
2026-01-02 04:30:50.409445 | orchestrator | 2026-01-02 04:30:50 - testbed-volume-4-node-4
2026-01-02 04:30:50.454870 | orchestrator | 2026-01-02 04:30:50 - testbed-volume-8-node-5
2026-01-02 04:30:50.499235 | orchestrator | 2026-01-02 04:30:50 - testbed-volume-5-node-5
2026-01-02 04:30:50.548780 | orchestrator | 2026-01-02 04:30:50 - testbed-volume-2-node-5
2026-01-02 04:30:50.593228 | orchestrator | 2026-01-02 04:30:50 - disconnect routers
2026-01-02 04:30:50.712300 | orchestrator | 2026-01-02 04:30:50 - testbed
2026-01-02 04:30:51.962106 | orchestrator | 2026-01-02 04:30:51 - clean up subnets
2026-01-02 04:30:52.019813 | orchestrator | 2026-01-02 04:30:52 - subnet-testbed-management
2026-01-02 04:30:52.225999 | orchestrator | 2026-01-02 04:30:52 - clean up networks
2026-01-02 04:30:53.024562 | orchestrator | 2026-01-02 04:30:53 - net-testbed-management
2026-01-02 04:30:53.343491 | orchestrator | 2026-01-02 04:30:53 - clean up security groups
2026-01-02 04:30:53.387905 | orchestrator | 2026-01-02 04:30:53 - testbed-node
2026-01-02 04:30:53.495968 | orchestrator | 2026-01-02 04:30:53 - testbed-management
2026-01-02 04:30:53.620512 | orchestrator | 2026-01-02 04:30:53 - clean up floating ips
2026-01-02 04:30:53.694616 | orchestrator | 2026-01-02 04:30:53 - 81.163.192.55
2026-01-02 04:30:54.069640 | orchestrator | 2026-01-02 04:30:54 - clean up routers
2026-01-02 04:30:54.173787 | orchestrator | 2026-01-02 04:30:54 - testbed
2026-01-02 04:30:55.580330 | orchestrator | ok: Runtime: 0:00:24.757155
2026-01-02 04:30:55.585125 |
2026-01-02 04:30:55.585302 | PLAY RECAP
2026-01-02 04:30:55.585425 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-01-02 04:30:55.585488 |
2026-01-02 04:30:55.765158 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-02 04:30:55.766319 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-02 04:30:56.558669 |
2026-01-02 04:30:56.558919 | PLAY [Cleanup play]
2026-01-02 04:30:56.579319 |
2026-01-02 04:30:56.579520 | TASK [Set cloud fact (Zuul deployment)]
2026-01-02 04:30:56.632238 | orchestrator | ok
2026-01-02 04:30:56.639726 |
2026-01-02 04:30:56.639884 | TASK [Set cloud fact (local deployment)]
2026-01-02 04:30:56.685195 | orchestrator | skipping: Conditional result was False
2026-01-02 04:30:56.704695 |
2026-01-02 04:30:56.704901 | TASK [Clean the cloud environment]
2026-01-02 04:30:57.931341 | orchestrator | 2026-01-02 04:30:57 - clean up servers
2026-01-02 04:30:58.558922 | orchestrator | 2026-01-02 04:30:58 - clean up keypairs
2026-01-02 04:30:58.575230 | orchestrator | 2026-01-02 04:30:58 - wait for servers to be gone
2026-01-02 04:30:58.620427 | orchestrator | 2026-01-02 04:30:58 - clean up ports
2026-01-02 04:30:58.698793 | orchestrator | 2026-01-02 04:30:58 - clean up volumes
2026-01-02 04:30:58.781825 | orchestrator | 2026-01-02 04:30:58 - disconnect routers
2026-01-02 04:30:58.808458 | orchestrator | 2026-01-02 04:30:58 - clean up subnets
2026-01-02 04:30:58.832278 | orchestrator | 2026-01-02 04:30:58 - clean up networks
2026-01-02 04:30:58.991067 | orchestrator | 2026-01-02 04:30:58 - clean up security groups
2026-01-02 04:30:59.071495 | orchestrator | 2026-01-02 04:30:59 - clean up floating ips
2026-01-02 04:30:59.101235 | orchestrator | 2026-01-02 04:30:59 - clean up routers
2026-01-02 04:30:59.265085 | orchestrator | ok: Runtime: 0:00:01.594417
2026-01-02 04:30:59.267537 |
2026-01-02 04:30:59.267643 | PLAY RECAP
2026-01-02 04:30:59.267714 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-02 04:30:59.267749 |
2026-01-02 04:30:59.404442 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-02 04:30:59.407189 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-02 04:31:00.217075 |
2026-01-02 04:31:00.217243 | PLAY [Base post-fetch]
2026-01-02 04:31:00.234431 |
2026-01-02 04:31:00.234585 | TASK [fetch-output : Set log path for multiple nodes]
2026-01-02 04:31:00.300149 | orchestrator | skipping: Conditional result was False
2026-01-02 04:31:00.310071 |
2026-01-02 04:31:00.310246 | TASK [fetch-output : Set log path for single node]
2026-01-02 04:31:00.364097 | orchestrator | ok
2026-01-02 04:31:00.370674 |
2026-01-02 04:31:00.371429 | LOOP [fetch-output : Ensure local output dirs]
2026-01-02 04:31:00.902463 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/fd31d5addc3042eb80219b9fb0deced2/work/logs"
2026-01-02 04:31:01.186415 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/fd31d5addc3042eb80219b9fb0deced2/work/artifacts"
2026-01-02 04:31:01.514676 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/fd31d5addc3042eb80219b9fb0deced2/work/docs"
2026-01-02 04:31:01.545676 |
2026-01-02 04:31:01.545887 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-01-02 04:31:02.634754 | orchestrator | changed: .d..t...... ./
2026-01-02 04:31:02.635205 | orchestrator | changed: All items complete
2026-01-02 04:31:02.635276 |
2026-01-02 04:31:03.380893 | orchestrator | changed: .d..t...... ./
2026-01-02 04:31:04.124947 | orchestrator | changed: .d..t...... ./
2026-01-02 04:31:04.142155 |
2026-01-02 04:31:04.142312 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-01-02 04:31:04.179925 | orchestrator | skipping: Conditional result was False
2026-01-02 04:31:04.182626 | orchestrator | skipping: Conditional result was False
2026-01-02 04:31:04.201924 |
2026-01-02 04:31:04.202074 | PLAY RECAP
2026-01-02 04:31:04.202138 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-01-02 04:31:04.202169 |
2026-01-02 04:31:04.353300 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-02 04:31:04.355917 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-02 04:31:05.195034 |
2026-01-02 04:31:05.195210 | PLAY [Base post]
2026-01-02 04:31:05.211578 |
2026-01-02 04:31:05.211739 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-01-02 04:31:06.378431 | orchestrator | changed
2026-01-02 04:31:06.389568 |
2026-01-02 04:31:06.389748 | PLAY RECAP
2026-01-02 04:31:06.389848 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-01-02 04:31:06.389970 |
2026-01-02 04:31:06.512550 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-02 04:31:06.515194 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-01-02 04:31:07.379620 |
2026-01-02 04:31:07.379805 | PLAY [Base post-logs]
2026-01-02 04:31:07.391367 |
2026-01-02 04:31:07.391520 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-01-02 04:31:07.848828 | localhost | changed
2026-01-02 04:31:07.867461 |
2026-01-02 04:31:07.867668 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-01-02 04:31:07.897257 | localhost | ok
2026-01-02 04:31:07.903106 |
2026-01-02 04:31:07.903270 | TASK [Set zuul-log-path fact]
2026-01-02 04:31:07.919891 | localhost | ok
2026-01-02 04:31:07.932538 |
2026-01-02 04:31:07.932666 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-02 04:31:07.969366 | localhost | ok
2026-01-02 04:31:07.974288 |
2026-01-02 04:31:07.974443 | TASK [upload-logs : Create log directories]
2026-01-02 04:31:08.526393 | localhost | changed
2026-01-02 04:31:08.531111 |
2026-01-02 04:31:08.531269 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-01-02 04:31:09.086467 | localhost -> localhost | ok: Runtime: 0:00:00.008872
2026-01-02 04:31:09.094493 |
2026-01-02 04:31:09.094715 | TASK [upload-logs : Upload logs to log server]
2026-01-02 04:31:09.695310 | localhost | Output suppressed because no_log was given
2026-01-02 04:31:09.699856 |
2026-01-02 04:31:09.700148 | LOOP [upload-logs : Compress console log and json output]
2026-01-02 04:31:09.763076 | localhost | skipping: Conditional result was False
2026-01-02 04:31:09.768354 | localhost | skipping: Conditional result was False
2026-01-02 04:31:09.783392 |
2026-01-02 04:31:09.783639 | LOOP [upload-logs : Upload compressed console log and json output]
2026-01-02 04:31:09.834470 | localhost | skipping: Conditional result was False
2026-01-02 04:31:09.835213 |
2026-01-02 04:31:09.838826 | localhost | skipping: Conditional result was False
2026-01-02 04:31:09.844204 |
2026-01-02 04:31:09.844335 | LOOP [upload-logs : Upload console log and json output]
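The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines at the top of this excerpt come from a loop that polls task state once per second, and RUN END RESULT_TIMED_OUT is what the job reports when the tasks are still pending at the deadline. A minimal sketch of such a polling loop with an overall deadline (all names hypothetical, not the actual osism client code):

```python
import time


def wait_for_tasks(get_state, task_ids, poll_interval=1.0, deadline=None):
    """Poll task states until all reach a terminal state or the deadline passes.

    get_state is a hypothetical callable mapping a task id to a state string.
    Returns True if every task finished, False if the deadline was hit
    (analogous to the RESULT_TIMED_OUT outcome in the log above).
    """
    terminal = {"SUCCESS", "FAILURE", "REVOKED"}  # assumed terminal states
    start = time.monotonic()
    pending = set(task_ids)
    while pending:
        # sorted() copies the set, so discarding while iterating is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in terminal:
                pending.discard(task_id)
        if not pending:
            break
        if deadline is not None and time.monotonic() - start >= deadline:
            return False  # gave up: tasks still pending at the deadline
        print(f"Wait {int(poll_interval)} second(s) until the next check")
        time.sleep(poll_interval)
    return True
```

In this run two tasks never left STARTED, so the loop was still waiting when the job-level timeout fired and Zuul moved on to the post-run playbooks.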