2026-04-08 00:00:07.230677 | Job console starting
2026-04-08 00:00:07.289812 | Updating git repos
2026-04-08 00:00:07.429246 | Cloning repos into workspace
2026-04-08 00:00:07.773557 | Restoring repo states
2026-04-08 00:00:07.819953 | Merging changes
2026-04-08 00:00:07.819974 | Checking out repos
2026-04-08 00:00:08.115514 | Preparing playbooks
2026-04-08 00:00:08.959502 | Running Ansible setup
2026-04-08 00:00:15.190587 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-08 00:00:17.055169 |
2026-04-08 00:00:17.055299 | PLAY [Base pre]
2026-04-08 00:00:17.106586 |
2026-04-08 00:00:17.106718 | TASK [Setup log path fact]
2026-04-08 00:00:17.137127 | orchestrator | ok
2026-04-08 00:00:17.168654 |
2026-04-08 00:00:17.168789 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-08 00:00:17.237159 | orchestrator | ok
2026-04-08 00:00:17.268290 |
2026-04-08 00:00:17.268424 | TASK [emit-job-header : Print job information]
2026-04-08 00:00:17.331406 | # Job Information
2026-04-08 00:00:17.331551 | Ansible Version: 2.16.14
2026-04-08 00:00:17.331580 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-08 00:00:17.331607 | Pipeline: periodic-midnight
2026-04-08 00:00:17.331626 | Executor: 521e9411259a
2026-04-08 00:00:17.331643 | Triggered by: https://github.com/osism/testbed
2026-04-08 00:00:17.331661 | Event ID: 605c59d85f174b4ca3197f00f9d26f38
2026-04-08 00:00:17.340518 |
2026-04-08 00:00:17.341796 | LOOP [emit-job-header : Print node information]
2026-04-08 00:00:17.530898 | orchestrator | ok:
2026-04-08 00:00:17.531064 | orchestrator | # Node Information
2026-04-08 00:00:17.531098 | orchestrator | Inventory Hostname: orchestrator
2026-04-08 00:00:17.531123 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-08 00:00:17.531146 | orchestrator | Username: zuul-testbed03
2026-04-08 00:00:17.531167 | orchestrator | Distro: Debian 12.13
2026-04-08 00:00:17.531190 | orchestrator | Provider: static-testbed
2026-04-08 00:00:17.531211 | orchestrator | Region:
2026-04-08 00:00:17.531231 | orchestrator | Label: testbed-orchestrator
2026-04-08 00:00:17.531250 | orchestrator | Product Name: OpenStack Nova
2026-04-08 00:00:17.531270 | orchestrator | Interface IP: 81.163.193.140
2026-04-08 00:00:17.551935 |
2026-04-08 00:00:17.552040 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-08 00:00:18.476541 | orchestrator -> localhost | changed
2026-04-08 00:00:18.483179 |
2026-04-08 00:00:18.483264 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-08 00:00:21.058323 | orchestrator -> localhost | changed
2026-04-08 00:00:21.072853 |
2026-04-08 00:00:21.073000 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-08 00:00:21.557481 | orchestrator -> localhost | ok
2026-04-08 00:00:21.564459 |
2026-04-08 00:00:21.564575 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-08 00:00:21.593538 | orchestrator | ok
2026-04-08 00:00:21.631395 | orchestrator | included: /var/lib/zuul/builds/041735f2a82d4175bace8e93dd5cfed6/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-08 00:00:21.639118 |
2026-04-08 00:00:21.639219 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-08 00:00:23.700281 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-08 00:00:23.700587 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/041735f2a82d4175bace8e93dd5cfed6/work/041735f2a82d4175bace8e93dd5cfed6_id_rsa
2026-04-08 00:00:23.700630 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/041735f2a82d4175bace8e93dd5cfed6/work/041735f2a82d4175bace8e93dd5cfed6_id_rsa.pub
2026-04-08 00:00:23.700658 | orchestrator -> localhost | The key fingerprint is:
2026-04-08 00:00:23.700686 | orchestrator -> localhost | SHA256:6hQ42vJzPsvVUZJN1ISQiLm6U6ABzIHRT8NYU8USOd0 zuul-build-sshkey
2026-04-08 00:00:23.700709 | orchestrator -> localhost | The key's randomart image is:
2026-04-08 00:00:23.700745 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-08 00:00:23.700768 | orchestrator -> localhost | |=+.+o.oO.o.+o+. |
2026-04-08 00:00:23.700790 | orchestrator -> localhost | |ooo +.* + E+. . |
2026-04-08 00:00:23.700810 | orchestrator -> localhost | | . o . + o o |
2026-04-08 00:00:23.700831 | orchestrator -> localhost | | . o.. o |
2026-04-08 00:00:23.700851 | orchestrator -> localhost | | ooo. S . |
2026-04-08 00:00:23.700874 | orchestrator -> localhost | | .o...o . . |
2026-04-08 00:00:23.710993 | orchestrator -> localhost | | o .oo . . |
2026-04-08 00:00:23.711039 | orchestrator -> localhost | | o++o. |
2026-04-08 00:00:23.711063 | orchestrator -> localhost | | .==o |
2026-04-08 00:00:23.711083 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-08 00:00:23.711137 | orchestrator -> localhost | ok: Runtime: 0:00:01.192317
2026-04-08 00:00:23.717061 |
2026-04-08 00:00:23.717141 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-08 00:00:23.754088 | orchestrator | ok
2026-04-08 00:00:23.762113 | orchestrator | included: /var/lib/zuul/builds/041735f2a82d4175bace8e93dd5cfed6/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-08 00:00:23.782543 |
2026-04-08 00:00:23.782628 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-08 00:00:23.836148 | orchestrator | skipping: Conditional result was False
2026-04-08 00:00:23.842412 |
2026-04-08 00:00:23.842494 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-08 00:00:24.726697 | orchestrator | changed
2026-04-08 00:00:24.740948 |
2026-04-08 00:00:24.741058 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-08 00:00:25.039162 | orchestrator | ok
2026-04-08 00:00:25.057595 |
2026-04-08 00:00:25.057707 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-08 00:00:25.704773 | orchestrator | ok
2026-04-08 00:00:25.716981 |
2026-04-08 00:00:25.717089 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-08 00:00:26.227623 | orchestrator | ok
2026-04-08 00:00:26.251725 |
2026-04-08 00:00:26.251838 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-08 00:00:26.373208 | orchestrator | skipping: Conditional result was False
2026-04-08 00:00:26.383510 |
2026-04-08 00:00:26.383615 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-08 00:00:27.713646 | orchestrator -> localhost | changed
2026-04-08 00:00:27.730370 |
2026-04-08 00:00:27.730487 | TASK [add-build-sshkey : Add back temp key]
2026-04-08 00:00:28.684813 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/041735f2a82d4175bace8e93dd5cfed6/work/041735f2a82d4175bace8e93dd5cfed6_id_rsa (zuul-build-sshkey)
2026-04-08 00:00:28.684998 | orchestrator -> localhost | ok: Runtime: 0:00:00.044016
2026-04-08 00:00:28.690853 |
2026-04-08 00:00:28.690938 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-08 00:00:29.386614 | orchestrator | ok
2026-04-08 00:00:29.391520 |
2026-04-08 00:00:29.391601 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-08 00:00:29.444533 | orchestrator | skipping: Conditional result was False
2026-04-08 00:00:29.513110 |
2026-04-08 00:00:29.513210 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-08 00:00:30.206976 | orchestrator | ok
2026-04-08 00:00:30.233873 |
2026-04-08 00:00:30.233983 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-08 00:00:30.332760 | orchestrator | ok
2026-04-08 00:00:30.351753 |
2026-04-08 00:00:30.351861 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-08 00:00:31.111963 | orchestrator -> localhost | ok
2026-04-08 00:00:31.119054 |
2026-04-08 00:00:31.119145 | TASK [validate-host : Collect information about the host]
2026-04-08 00:00:33.151925 | orchestrator | ok
2026-04-08 00:00:33.197501 |
2026-04-08 00:00:33.197628 | TASK [validate-host : Sanitize hostname]
2026-04-08 00:00:33.275919 | orchestrator | ok
2026-04-08 00:00:33.281131 |
2026-04-08 00:00:33.281235 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-08 00:00:34.137400 | orchestrator -> localhost | changed
2026-04-08 00:00:34.145587 |
2026-04-08 00:00:34.145694 | TASK [validate-host : Collect information about zuul worker]
2026-04-08 00:00:34.668734 | orchestrator | ok
2026-04-08 00:00:34.674160 |
2026-04-08 00:00:34.674263 | TASK [validate-host : Write out all zuul information for each host]
2026-04-08 00:00:36.002448 | orchestrator -> localhost | changed
2026-04-08 00:00:36.011033 |
2026-04-08 00:00:36.011121 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-08 00:00:36.390179 | orchestrator | ok
2026-04-08 00:00:36.395275 |
2026-04-08 00:00:36.395385 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-08 00:01:55.513200 | orchestrator | changed:
2026-04-08 00:01:55.514779 | orchestrator | .d..t...... src/
2026-04-08 00:01:55.514878 | orchestrator | .d..t...... src/github.com/
2026-04-08 00:01:55.514918 | orchestrator | .d..t...... src/github.com/osism/
2026-04-08 00:01:55.514950 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-08 00:01:55.514981 | orchestrator | RedHat.yml
2026-04-08 00:01:55.534782 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-08 00:01:55.534800 | orchestrator | RedHat.yml
2026-04-08 00:01:55.534869 | orchestrator | = 2.2.0"...
2026-04-08 00:02:08.551583 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-08 00:02:08.570362 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-04-08 00:02:08.763159 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-08 00:02:09.233552 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-08 00:02:09.300815 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-08 00:02:09.877349 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-08 00:02:09.951202 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-08 00:02:10.818232 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-08 00:02:10.818290 | orchestrator |
2026-04-08 00:02:10.818298 | orchestrator | Providers are signed by their developers.
2026-04-08 00:02:10.818303 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-08 00:02:10.818316 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-08 00:02:10.818368 | orchestrator |
2026-04-08 00:02:10.818374 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-08 00:02:10.818384 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-08 00:02:10.818389 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-08 00:02:10.818400 | orchestrator | you run "tofu init" in the future.
2026-04-08 00:02:10.818797 | orchestrator |
2026-04-08 00:02:10.818843 | orchestrator | OpenTofu has been successfully initialized!
2026-04-08 00:02:10.818871 | orchestrator |
2026-04-08 00:02:10.818877 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-08 00:02:10.818881 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-08 00:02:10.818886 | orchestrator | should now work.
2026-04-08 00:02:10.818890 | orchestrator |
2026-04-08 00:02:10.818894 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-08 00:02:10.818898 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-08 00:02:10.818909 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-08 00:02:11.021919 | orchestrator | Created and switched to workspace "ci"!
2026-04-08 00:02:11.021969 | orchestrator |
2026-04-08 00:02:11.021974 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-08 00:02:11.021980 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-08 00:02:11.021985 | orchestrator | for this configuration.
2026-04-08 00:02:11.471948 | orchestrator | ci.auto.tfvars
2026-04-08 00:02:11.476298 | orchestrator | default_custom.tf
2026-04-08 00:02:12.470744 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-08 00:02:12.994217 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-08 00:02:13.188448 | orchestrator |
2026-04-08 00:02:13.549815 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-08 00:02:13.549893 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-08 00:02:13.549907 | orchestrator | + create
2026-04-08 00:02:13.549918 | orchestrator | <= read (data resources)
2026-04-08 00:02:13.549930 | orchestrator |
2026-04-08 00:02:13.549948 | orchestrator | OpenTofu will perform the following actions:
2026-04-08 00:02:13.549997 | orchestrator |
2026-04-08 00:02:13.550115 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-04-08 00:02:13.550140 | orchestrator | # (config refers to values not yet known)
2026-04-08 00:02:13.550158 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-04-08 00:02:13.550175 | orchestrator | + checksum = (known after apply)
2026-04-08 00:02:13.550186 | orchestrator | + created_at = (known after apply)
2026-04-08 00:02:13.550195 | orchestrator | + file = (known after apply)
2026-04-08 00:02:13.550205 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.550246 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:13.550264 | orchestrator | + min_disk_gb = (known after apply)
2026-04-08 00:02:13.550280 | orchestrator | + min_ram_mb = (known after apply)
2026-04-08 00:02:13.550296 | orchestrator | + most_recent = true
2026-04-08 00:02:13.550312 | orchestrator | + name = (known after apply)
2026-04-08 00:02:13.550328 | orchestrator | + protected = (known after apply)
2026-04-08 00:02:13.550343 | orchestrator | + region = (known after apply)
2026-04-08 00:02:13.550357 | orchestrator | + schema = (known after apply)
2026-04-08 00:02:13.550394 | orchestrator | + size_bytes = (known after apply)
2026-04-08 00:02:13.550410 | orchestrator | + tags = (known after apply)
2026-04-08 00:02:13.550426 | orchestrator | + updated_at = (known after apply)
2026-04-08 00:02:13.550443 | orchestrator | }
2026-04-08 00:02:13.550460 | orchestrator |
2026-04-08 00:02:13.550473 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-04-08 00:02:13.550483 | orchestrator | # (config refers to values not yet known)
2026-04-08 00:02:13.550500 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-04-08 00:02:13.550516 | orchestrator | + checksum = (known after apply)
2026-04-08 00:02:13.550532 | orchestrator | + created_at = (known after apply)
2026-04-08 00:02:13.550548 | orchestrator | + file = (known after apply)
2026-04-08 00:02:13.550565 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.550582 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:13.550596 | orchestrator | + min_disk_gb = (known after apply)
2026-04-08 00:02:13.550610 | orchestrator | + min_ram_mb = (known after apply)
2026-04-08 00:02:13.550627 | orchestrator | + most_recent = true
2026-04-08 00:02:13.550643 | orchestrator | + name = (known after apply)
2026-04-08 00:02:13.550659 | orchestrator | + protected = (known after apply)
2026-04-08 00:02:13.550675 | orchestrator | + region = (known after apply)
2026-04-08 00:02:13.550691 | orchestrator | + schema = (known after apply)
2026-04-08 00:02:13.550708 | orchestrator | + size_bytes = (known after apply)
2026-04-08 00:02:13.550724 | orchestrator | + tags = (known after apply)
2026-04-08 00:02:13.550741 | orchestrator | + updated_at = (known after apply)
2026-04-08 00:02:13.550757 | orchestrator | }
2026-04-08 00:02:13.550773 | orchestrator |
2026-04-08 00:02:13.550790 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-04-08 00:02:13.550807 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-04-08 00:02:13.550823 | orchestrator | + content = (known after apply)
2026-04-08 00:02:13.550841 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-08 00:02:13.550867 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-08 00:02:13.550883 | orchestrator | + content_md5 = (known after apply)
2026-04-08 00:02:13.550900 | orchestrator | + content_sha1 = (known after apply)
2026-04-08 00:02:13.550916 | orchestrator | + content_sha256 = (known after apply)
2026-04-08 00:02:13.550933 | orchestrator | + content_sha512 = (known after apply)
2026-04-08 00:02:13.550950 | orchestrator | + directory_permission = "0777"
2026-04-08 00:02:13.550966 | orchestrator | + file_permission = "0644"
2026-04-08 00:02:13.550983 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-04-08 00:02:13.551000 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.551016 | orchestrator | }
2026-04-08 00:02:13.551032 | orchestrator |
2026-04-08 00:02:13.551073 | orchestrator | # local_file.id_rsa_pub will be created
2026-04-08 00:02:13.551090 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-04-08 00:02:13.551106 | orchestrator | + content = (known after apply)
2026-04-08 00:02:13.551121 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-08 00:02:13.551136 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-08 00:02:13.551151 | orchestrator | + content_md5 = (known after apply)
2026-04-08 00:02:13.551169 | orchestrator | + content_sha1 = (known after apply)
2026-04-08 00:02:13.551184 | orchestrator | + content_sha256 = (known after apply)
2026-04-08 00:02:13.551215 | orchestrator | + content_sha512 = (known after apply)
2026-04-08 00:02:13.551231 | orchestrator | + directory_permission = "0777"
2026-04-08 00:02:13.551248 | orchestrator | + file_permission = "0644"
2026-04-08 00:02:13.551287 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-04-08 00:02:13.551306 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.551323 | orchestrator | }
2026-04-08 00:02:13.551340 | orchestrator |
2026-04-08 00:02:13.551356 | orchestrator | # local_file.inventory will be created
2026-04-08 00:02:13.551372 | orchestrator | + resource "local_file" "inventory" {
2026-04-08 00:02:13.551389 | orchestrator | + content = (known after apply)
2026-04-08 00:02:13.551405 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-08 00:02:13.551423 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-08 00:02:13.551439 | orchestrator | + content_md5 = (known after apply)
2026-04-08 00:02:13.551455 | orchestrator | + content_sha1 = (known after apply)
2026-04-08 00:02:13.551474 | orchestrator | + content_sha256 = (known after apply)
2026-04-08 00:02:13.551490 | orchestrator | + content_sha512 = (known after apply)
2026-04-08 00:02:13.551505 | orchestrator | + directory_permission = "0777"
2026-04-08 00:02:13.551515 | orchestrator | + file_permission = "0644"
2026-04-08 00:02:13.551525 | orchestrator | + filename = "inventory.ci"
2026-04-08 00:02:13.551535 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.551544 | orchestrator | }
2026-04-08 00:02:13.551554 | orchestrator |
2026-04-08 00:02:13.551564 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-04-08 00:02:13.551573 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-04-08 00:02:13.551583 | orchestrator | + content = (sensitive value)
2026-04-08 00:02:13.551593 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-08 00:02:13.551603 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-08 00:02:13.551612 | orchestrator | + content_md5 = (known after apply)
2026-04-08 00:02:13.551624 | orchestrator | + content_sha1 = (known after apply)
2026-04-08 00:02:13.551640 | orchestrator | + content_sha256 = (known after apply)
2026-04-08 00:02:13.551657 | orchestrator | + content_sha512 = (known after apply)
2026-04-08 00:02:13.551673 | orchestrator | + directory_permission = "0700"
2026-04-08 00:02:13.551685 | orchestrator | + file_permission = "0600"
2026-04-08 00:02:13.551695 | orchestrator | + filename = ".id_rsa.ci"
2026-04-08 00:02:13.551705 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.551715 | orchestrator | }
2026-04-08 00:02:13.551725 | orchestrator |
2026-04-08 00:02:13.551753 | orchestrator | # null_resource.node_semaphore will be created
2026-04-08 00:02:13.551764 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-04-08 00:02:13.551774 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.551784 | orchestrator | }
2026-04-08 00:02:13.551793 | orchestrator |
2026-04-08 00:02:13.551803 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-08 00:02:13.551813 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-08 00:02:13.551826 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:13.551849 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:13.551868 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.551884 | orchestrator | + image_id = (known after apply)
2026-04-08 00:02:13.551900 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:13.551915 | orchestrator | + name = "testbed-volume-manager-base"
2026-04-08 00:02:13.551928 | orchestrator | + region = (known after apply)
2026-04-08 00:02:13.551944 | orchestrator | + size = 80
2026-04-08 00:02:13.551960 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:13.551977 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:13.551993 | orchestrator | }
2026-04-08 00:02:13.552010 | orchestrator |
2026-04-08 00:02:13.552021 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-08 00:02:13.552031 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-08 00:02:13.552041 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:13.552110 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:13.552121 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.552142 | orchestrator | + image_id = (known after apply)
2026-04-08 00:02:13.552152 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:13.552162 | orchestrator | + name = "testbed-volume-0-node-base"
2026-04-08 00:02:13.552172 | orchestrator | + region = (known after apply)
2026-04-08 00:02:13.552182 | orchestrator | + size = 80
2026-04-08 00:02:13.552191 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:13.552201 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:13.552210 | orchestrator | }
2026-04-08 00:02:13.552220 | orchestrator |
2026-04-08 00:02:13.552230 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-08 00:02:13.552239 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-08 00:02:13.552250 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:13.552259 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:13.552269 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.552279 | orchestrator | + image_id = (known after apply)
2026-04-08 00:02:13.552288 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:13.552298 | orchestrator | + name = "testbed-volume-1-node-base"
2026-04-08 00:02:13.552308 | orchestrator | + region = (known after apply)
2026-04-08 00:02:13.552317 | orchestrator | + size = 80
2026-04-08 00:02:13.552327 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:13.552336 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:13.552346 | orchestrator | }
2026-04-08 00:02:13.552355 | orchestrator |
2026-04-08 00:02:13.552365 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-08 00:02:13.552375 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-08 00:02:13.552384 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:13.552394 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:13.552404 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.552413 | orchestrator | + image_id = (known after apply)
2026-04-08 00:02:13.552423 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:13.552433 | orchestrator | + name = "testbed-volume-2-node-base"
2026-04-08 00:02:13.552442 | orchestrator | + region = (known after apply)
2026-04-08 00:02:13.552452 | orchestrator | + size = 80
2026-04-08 00:02:13.552475 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:13.552501 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:13.552519 | orchestrator | }
2026-04-08 00:02:13.552542 | orchestrator |
2026-04-08 00:02:13.552561 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-08 00:02:13.552579 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-08 00:02:13.552598 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:13.552610 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:13.552618 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.552626 | orchestrator | + image_id = (known after apply)
2026-04-08 00:02:13.552634 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:13.552642 | orchestrator | + name = "testbed-volume-3-node-base"
2026-04-08 00:02:13.552649 | orchestrator | + region = (known after apply)
2026-04-08 00:02:13.552657 | orchestrator | + size = 80
2026-04-08 00:02:13.552665 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:13.552673 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:13.552681 | orchestrator | }
2026-04-08 00:02:13.552689 | orchestrator |
2026-04-08 00:02:13.552697 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-08 00:02:13.552705 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-08 00:02:13.552713 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:13.552721 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:13.552729 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.552746 | orchestrator | + image_id = (known after apply)
2026-04-08 00:02:13.552754 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:13.552762 | orchestrator | + name = "testbed-volume-4-node-base"
2026-04-08 00:02:13.552770 | orchestrator | + region = (known after apply)
2026-04-08 00:02:13.552778 | orchestrator | + size = 80
2026-04-08 00:02:13.552786 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:13.552793 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:13.552801 | orchestrator | }
2026-04-08 00:02:13.552809 | orchestrator |
2026-04-08 00:02:13.552817 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-08 00:02:13.552825 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-08 00:02:13.552833 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:13.552841 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:13.552849 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.552867 | orchestrator | + image_id = (known after apply)
2026-04-08 00:02:13.552875 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:13.552883 | orchestrator | + name = "testbed-volume-5-node-base"
2026-04-08 00:02:13.552891 | orchestrator | + region = (known after apply)
2026-04-08 00:02:13.552899 | orchestrator | + size = 80
2026-04-08 00:02:13.552906 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:13.552914 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:13.552922 | orchestrator | }
2026-04-08 00:02:13.552930 | orchestrator |
2026-04-08 00:02:13.552938 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-08 00:02:13.552947 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-08 00:02:13.552955 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:13.552963 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:13.552971 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.552979 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:13.552987 | orchestrator | + name = "testbed-volume-0-node-3"
2026-04-08 00:02:13.552995 | orchestrator | + region = (known after apply)
2026-04-08 00:02:13.553003 | orchestrator | + size = 20
2026-04-08 00:02:13.553010 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:13.553018 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:13.553027 | orchestrator | }
2026-04-08 00:02:13.553035 | orchestrator |
2026-04-08 00:02:13.553043 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-08 00:02:13.553073 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-08 00:02:13.553081 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:13.553091 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:13.553105 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.553118 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:13.553130 | orchestrator | + name = "testbed-volume-1-node-4"
2026-04-08 00:02:13.553147 | orchestrator | + region = (known after apply)
2026-04-08 00:02:13.553165 | orchestrator | + size = 20
2026-04-08 00:02:13.553177 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:13.553191 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:13.553205 | orchestrator | }
2026-04-08 00:02:13.553214 | orchestrator |
2026-04-08 00:02:13.553222 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-08 00:02:13.553229 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-08 00:02:13.553237 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:13.553245 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:13.553253 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.553261 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:13.553269 | orchestrator | + name = "testbed-volume-2-node-5"
2026-04-08 00:02:13.553277 | orchestrator | + region = (known after apply)
2026-04-08 00:02:13.553293 | orchestrator | + size = 20
2026-04-08 00:02:13.553301 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:13.553308 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:13.553316 | orchestrator | }
2026-04-08 00:02:13.553324 | orchestrator |
2026-04-08 00:02:13.553332 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-08 00:02:13.553340 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-08 00:02:13.553348 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:13.553356 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:13.553364 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.553377 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:13.553385 | orchestrator | + name = "testbed-volume-3-node-3"
2026-04-08 00:02:13.553393 | orchestrator | + region = (known after apply)
2026-04-08 00:02:13.553404 | orchestrator | + size = 20
2026-04-08 00:02:13.553417 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:13.553430 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:13.553442 | orchestrator | }
2026-04-08 00:02:13.553462 | orchestrator |
2026-04-08 00:02:13.553477 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-08 00:02:13.553491 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-08 00:02:13.553499 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:13.553507 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:13.553515 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.553523 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:13.553531 | orchestrator | + name = "testbed-volume-4-node-4"
2026-04-08 00:02:13.553539 | orchestrator | + region = (known after apply)
2026-04-08 00:02:13.553547 | orchestrator | + size = 20
2026-04-08 00:02:13.553555 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:13.553562 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:13.553570 | orchestrator | }
2026-04-08 00:02:13.553578 | orchestrator |
2026-04-08 00:02:13.553586 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-08 00:02:13.553594 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-08 00:02:13.553602 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:13.553610 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:13.553623 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.553637 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:13.553650 | orchestrator | + name = "testbed-volume-5-node-5"
2026-04-08 00:02:13.553663 | orchestrator | + region = (known after apply)
2026-04-08 00:02:13.553674 | orchestrator | + size = 20
2026-04-08 00:02:13.553687 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:13.553699 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:13.553711 | orchestrator | }
2026-04-08 00:02:13.553723 | orchestrator |
2026-04-08 00:02:13.553735 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-08 00:02:13.553747 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-08 00:02:13.553758 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:13.553769 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:13.553782 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.553794 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:13.553806 | orchestrator | + name = "testbed-volume-6-node-3"
2026-04-08 00:02:13.553820 | orchestrator | + region = (known after apply)
2026-04-08 00:02:13.553832 | orchestrator | + size = 20
2026-04-08 00:02:13.553855 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:13.553867 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:13.553880 | orchestrator | }
2026-04-08 00:02:13.553893 | orchestrator |
2026-04-08 00:02:13.553905 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-08 00:02:13.553917 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-08 00:02:13.553940 | orchestrator | + attachment = (known after apply)
2026-04-08 00:02:13.553955 | orchestrator | + availability_zone = "nova"
2026-04-08 00:02:13.553968 | orchestrator | + id = (known after apply)
2026-04-08 00:02:13.553982 | orchestrator | + metadata = (known after apply)
2026-04-08 00:02:13.553995 | orchestrator | + name = "testbed-volume-7-node-4"
2026-04-08 00:02:13.554010 | orchestrator | + region = (known after apply)
2026-04-08 00:02:13.554252 | orchestrator | + size = 20
2026-04-08 00:02:13.554263 | orchestrator | + volume_retype_policy = "never"
2026-04-08 00:02:13.554271 | orchestrator | + volume_type = "ssd"
2026-04-08 00:02:13.554279 | orchestrator | }
2026-04-08 00:02:13.554313 | orchestrator |
2026-04-08 00:02:13.554323 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-08 00:02:13.554332 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-08 00:02:13.554340 | orchestrator | + attachment = (known after apply) 2026-04-08 00:02:13.554348 | orchestrator | + availability_zone = "nova" 2026-04-08 00:02:13.554356 | orchestrator | + id = (known after apply) 2026-04-08 00:02:13.554364 | orchestrator | + metadata = (known after apply) 2026-04-08 00:02:13.554371 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-08 00:02:13.554379 | orchestrator | + region = (known after apply) 2026-04-08 00:02:13.554387 | orchestrator | + size = 20 2026-04-08 00:02:13.554395 | orchestrator | + volume_retype_policy = "never" 2026-04-08 00:02:13.554404 | orchestrator | + volume_type = "ssd" 2026-04-08 00:02:13.554411 | orchestrator | } 2026-04-08 00:02:13.554419 | orchestrator | 2026-04-08 00:02:13.554427 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-08 00:02:13.554435 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-08 00:02:13.554443 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-08 00:02:13.554451 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-08 00:02:13.554471 | orchestrator | + all_metadata = (known after apply) 2026-04-08 00:02:13.554480 | orchestrator | + all_tags = (known after apply) 2026-04-08 00:02:13.554488 | orchestrator | + availability_zone = "nova" 2026-04-08 00:02:13.554495 | orchestrator | + config_drive = true 2026-04-08 00:02:13.554511 | orchestrator | + created = (known after apply) 2026-04-08 00:02:13.554519 | orchestrator | + flavor_id = (known after apply) 2026-04-08 00:02:13.554527 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-08 00:02:13.554535 | orchestrator | + force_delete = false 2026-04-08 00:02:13.554543 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-08 00:02:13.554551 | 
orchestrator | + id = (known after apply) 2026-04-08 00:02:13.554558 | orchestrator | + image_id = (known after apply) 2026-04-08 00:02:13.554566 | orchestrator | + image_name = (known after apply) 2026-04-08 00:02:13.554574 | orchestrator | + key_pair = "testbed" 2026-04-08 00:02:13.554582 | orchestrator | + name = "testbed-manager" 2026-04-08 00:02:13.554590 | orchestrator | + power_state = "active" 2026-04-08 00:02:13.554598 | orchestrator | + region = (known after apply) 2026-04-08 00:02:13.554606 | orchestrator | + security_groups = (known after apply) 2026-04-08 00:02:13.554613 | orchestrator | + stop_before_destroy = false 2026-04-08 00:02:13.554621 | orchestrator | + updated = (known after apply) 2026-04-08 00:02:13.554629 | orchestrator | + user_data = (sensitive value) 2026-04-08 00:02:13.554637 | orchestrator | 2026-04-08 00:02:13.554645 | orchestrator | + block_device { 2026-04-08 00:02:13.554653 | orchestrator | + boot_index = 0 2026-04-08 00:02:13.554661 | orchestrator | + delete_on_termination = false 2026-04-08 00:02:13.554669 | orchestrator | + destination_type = "volume" 2026-04-08 00:02:13.554677 | orchestrator | + multiattach = false 2026-04-08 00:02:13.554685 | orchestrator | + source_type = "volume" 2026-04-08 00:02:13.554693 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:13.554711 | orchestrator | } 2026-04-08 00:02:13.554719 | orchestrator | 2026-04-08 00:02:13.554727 | orchestrator | + network { 2026-04-08 00:02:13.554735 | orchestrator | + access_network = false 2026-04-08 00:02:13.554743 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-08 00:02:13.554751 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-08 00:02:13.554759 | orchestrator | + mac = (known after apply) 2026-04-08 00:02:13.554766 | orchestrator | + name = (known after apply) 2026-04-08 00:02:13.554774 | orchestrator | + port = (known after apply) 2026-04-08 00:02:13.554782 | orchestrator | + uuid = (known after apply) 2026-04-08 
00:02:13.554790 | orchestrator | } 2026-04-08 00:02:13.554798 | orchestrator | } 2026-04-08 00:02:13.554806 | orchestrator | 2026-04-08 00:02:13.554814 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-08 00:02:13.554822 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-08 00:02:13.554830 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-08 00:02:13.554837 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-08 00:02:13.554845 | orchestrator | + all_metadata = (known after apply) 2026-04-08 00:02:13.554853 | orchestrator | + all_tags = (known after apply) 2026-04-08 00:02:13.554861 | orchestrator | + availability_zone = "nova" 2026-04-08 00:02:13.554869 | orchestrator | + config_drive = true 2026-04-08 00:02:13.554877 | orchestrator | + created = (known after apply) 2026-04-08 00:02:13.554886 | orchestrator | + flavor_id = (known after apply) 2026-04-08 00:02:13.554895 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-08 00:02:13.554903 | orchestrator | + force_delete = false 2026-04-08 00:02:13.554911 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-08 00:02:13.554919 | orchestrator | + id = (known after apply) 2026-04-08 00:02:13.554927 | orchestrator | + image_id = (known after apply) 2026-04-08 00:02:13.554935 | orchestrator | + image_name = (known after apply) 2026-04-08 00:02:13.554943 | orchestrator | + key_pair = "testbed" 2026-04-08 00:02:13.554951 | orchestrator | + name = "testbed-node-0" 2026-04-08 00:02:13.554959 | orchestrator | + power_state = "active" 2026-04-08 00:02:13.554967 | orchestrator | + region = (known after apply) 2026-04-08 00:02:13.554974 | orchestrator | + security_groups = (known after apply) 2026-04-08 00:02:13.554982 | orchestrator | + stop_before_destroy = false 2026-04-08 00:02:13.554990 | orchestrator | + updated = (known after apply) 2026-04-08 00:02:13.555009 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-08 00:02:13.555017 | orchestrator | 2026-04-08 00:02:13.555025 | orchestrator | + block_device { 2026-04-08 00:02:13.555033 | orchestrator | + boot_index = 0 2026-04-08 00:02:13.555041 | orchestrator | + delete_on_termination = false 2026-04-08 00:02:13.555094 | orchestrator | + destination_type = "volume" 2026-04-08 00:02:13.555102 | orchestrator | + multiattach = false 2026-04-08 00:02:13.555110 | orchestrator | + source_type = "volume" 2026-04-08 00:02:13.555118 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:13.555126 | orchestrator | } 2026-04-08 00:02:13.555134 | orchestrator | 2026-04-08 00:02:13.555142 | orchestrator | + network { 2026-04-08 00:02:13.555149 | orchestrator | + access_network = false 2026-04-08 00:02:13.555157 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-08 00:02:13.555165 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-08 00:02:13.555174 | orchestrator | + mac = (known after apply) 2026-04-08 00:02:13.555182 | orchestrator | + name = (known after apply) 2026-04-08 00:02:13.555190 | orchestrator | + port = (known after apply) 2026-04-08 00:02:13.555198 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:13.555978 | orchestrator | } 2026-04-08 00:02:13.556013 | orchestrator | } 2026-04-08 00:02:13.556021 | orchestrator | 2026-04-08 00:02:13.556028 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-08 00:02:13.556036 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-08 00:02:13.556043 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-08 00:02:13.556085 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-08 00:02:13.556092 | orchestrator | + all_metadata = (known after apply) 2026-04-08 00:02:13.556099 | orchestrator | + all_tags = (known after apply) 2026-04-08 00:02:13.556105 | orchestrator | + availability_zone = "nova" 2026-04-08 00:02:13.556113 
| orchestrator | + config_drive = true 2026-04-08 00:02:13.556119 | orchestrator | + created = (known after apply) 2026-04-08 00:02:13.556126 | orchestrator | + flavor_id = (known after apply) 2026-04-08 00:02:13.556483 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-08 00:02:13.556503 | orchestrator | + force_delete = false 2026-04-08 00:02:13.556511 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-08 00:02:13.556517 | orchestrator | + id = (known after apply) 2026-04-08 00:02:13.556524 | orchestrator | + image_id = (known after apply) 2026-04-08 00:02:13.556531 | orchestrator | + image_name = (known after apply) 2026-04-08 00:02:13.556551 | orchestrator | + key_pair = "testbed" 2026-04-08 00:02:13.556558 | orchestrator | + name = "testbed-node-1" 2026-04-08 00:02:13.556565 | orchestrator | + power_state = "active" 2026-04-08 00:02:13.556572 | orchestrator | + region = (known after apply) 2026-04-08 00:02:13.556579 | orchestrator | + security_groups = (known after apply) 2026-04-08 00:02:13.556585 | orchestrator | + stop_before_destroy = false 2026-04-08 00:02:13.556592 | orchestrator | + updated = (known after apply) 2026-04-08 00:02:13.556607 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-08 00:02:13.556615 | orchestrator | 2026-04-08 00:02:13.556621 | orchestrator | + block_device { 2026-04-08 00:02:13.556628 | orchestrator | + boot_index = 0 2026-04-08 00:02:13.556635 | orchestrator | + delete_on_termination = false 2026-04-08 00:02:13.556642 | orchestrator | + destination_type = "volume" 2026-04-08 00:02:13.556648 | orchestrator | + multiattach = false 2026-04-08 00:02:13.556655 | orchestrator | + source_type = "volume" 2026-04-08 00:02:13.556661 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:13.556669 | orchestrator | } 2026-04-08 00:02:13.556675 | orchestrator | 2026-04-08 00:02:13.556682 | orchestrator | + network { 2026-04-08 00:02:13.556689 | orchestrator | + access_network = 
false 2026-04-08 00:02:13.556695 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-08 00:02:13.556702 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-08 00:02:13.556708 | orchestrator | + mac = (known after apply) 2026-04-08 00:02:13.556715 | orchestrator | + name = (known after apply) 2026-04-08 00:02:13.556722 | orchestrator | + port = (known after apply) 2026-04-08 00:02:13.556728 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:13.556735 | orchestrator | } 2026-04-08 00:02:13.556742 | orchestrator | } 2026-04-08 00:02:13.556749 | orchestrator | 2026-04-08 00:02:13.556755 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-08 00:02:13.556762 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-08 00:02:13.556769 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-08 00:02:13.556775 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-08 00:02:13.556784 | orchestrator | + all_metadata = (known after apply) 2026-04-08 00:02:13.556791 | orchestrator | + all_tags = (known after apply) 2026-04-08 00:02:13.556798 | orchestrator | + availability_zone = "nova" 2026-04-08 00:02:13.556804 | orchestrator | + config_drive = true 2026-04-08 00:02:13.556811 | orchestrator | + created = (known after apply) 2026-04-08 00:02:13.556818 | orchestrator | + flavor_id = (known after apply) 2026-04-08 00:02:13.556824 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-08 00:02:13.556831 | orchestrator | + force_delete = false 2026-04-08 00:02:13.556838 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-08 00:02:13.556844 | orchestrator | + id = (known after apply) 2026-04-08 00:02:13.556851 | orchestrator | + image_id = (known after apply) 2026-04-08 00:02:13.556865 | orchestrator | + image_name = (known after apply) 2026-04-08 00:02:13.556872 | orchestrator | + key_pair = "testbed" 2026-04-08 00:02:13.556879 | orchestrator | + name = 
"testbed-node-2" 2026-04-08 00:02:13.556885 | orchestrator | + power_state = "active" 2026-04-08 00:02:13.556892 | orchestrator | + region = (known after apply) 2026-04-08 00:02:13.556899 | orchestrator | + security_groups = (known after apply) 2026-04-08 00:02:13.556905 | orchestrator | + stop_before_destroy = false 2026-04-08 00:02:13.556912 | orchestrator | + updated = (known after apply) 2026-04-08 00:02:13.556919 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-08 00:02:13.556925 | orchestrator | 2026-04-08 00:02:13.556932 | orchestrator | + block_device { 2026-04-08 00:02:13.556939 | orchestrator | + boot_index = 0 2026-04-08 00:02:13.556945 | orchestrator | + delete_on_termination = false 2026-04-08 00:02:13.556952 | orchestrator | + destination_type = "volume" 2026-04-08 00:02:13.556958 | orchestrator | + multiattach = false 2026-04-08 00:02:13.556965 | orchestrator | + source_type = "volume" 2026-04-08 00:02:13.556971 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:13.556978 | orchestrator | } 2026-04-08 00:02:13.556985 | orchestrator | 2026-04-08 00:02:13.556991 | orchestrator | + network { 2026-04-08 00:02:13.556998 | orchestrator | + access_network = false 2026-04-08 00:02:13.557005 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-08 00:02:13.557023 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-08 00:02:13.557031 | orchestrator | + mac = (known after apply) 2026-04-08 00:02:13.557043 | orchestrator | + name = (known after apply) 2026-04-08 00:02:13.557072 | orchestrator | + port = (known after apply) 2026-04-08 00:02:13.557079 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:13.557085 | orchestrator | } 2026-04-08 00:02:13.557092 | orchestrator | } 2026-04-08 00:02:13.557099 | orchestrator | 2026-04-08 00:02:13.557109 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-08 00:02:13.557116 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-08 00:02:13.557123 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-08 00:02:13.557130 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-08 00:02:13.557136 | orchestrator | + all_metadata = (known after apply) 2026-04-08 00:02:13.557143 | orchestrator | + all_tags = (known after apply) 2026-04-08 00:02:13.557150 | orchestrator | + availability_zone = "nova" 2026-04-08 00:02:13.557156 | orchestrator | + config_drive = true 2026-04-08 00:02:13.557163 | orchestrator | + created = (known after apply) 2026-04-08 00:02:13.557169 | orchestrator | + flavor_id = (known after apply) 2026-04-08 00:02:13.557176 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-08 00:02:13.557183 | orchestrator | + force_delete = false 2026-04-08 00:02:13.557189 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-08 00:02:13.557196 | orchestrator | + id = (known after apply) 2026-04-08 00:02:13.557202 | orchestrator | + image_id = (known after apply) 2026-04-08 00:02:13.557209 | orchestrator | + image_name = (known after apply) 2026-04-08 00:02:13.557216 | orchestrator | + key_pair = "testbed" 2026-04-08 00:02:13.557222 | orchestrator | + name = "testbed-node-3" 2026-04-08 00:02:13.557229 | orchestrator | + power_state = "active" 2026-04-08 00:02:13.557235 | orchestrator | + region = (known after apply) 2026-04-08 00:02:13.557242 | orchestrator | + security_groups = (known after apply) 2026-04-08 00:02:13.557249 | orchestrator | + stop_before_destroy = false 2026-04-08 00:02:13.557255 | orchestrator | + updated = (known after apply) 2026-04-08 00:02:13.557262 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-08 00:02:13.557269 | orchestrator | 2026-04-08 00:02:13.557275 | orchestrator | + block_device { 2026-04-08 00:02:13.557282 | orchestrator | + boot_index = 0 2026-04-08 00:02:13.557289 | orchestrator | + delete_on_termination = false 2026-04-08 
00:02:13.557295 | orchestrator | + destination_type = "volume" 2026-04-08 00:02:13.557307 | orchestrator | + multiattach = false 2026-04-08 00:02:13.557314 | orchestrator | + source_type = "volume" 2026-04-08 00:02:13.557320 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:13.557327 | orchestrator | } 2026-04-08 00:02:13.557334 | orchestrator | 2026-04-08 00:02:13.557340 | orchestrator | + network { 2026-04-08 00:02:13.557347 | orchestrator | + access_network = false 2026-04-08 00:02:13.557354 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-08 00:02:13.557360 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-08 00:02:13.557367 | orchestrator | + mac = (known after apply) 2026-04-08 00:02:13.557374 | orchestrator | + name = (known after apply) 2026-04-08 00:02:13.557380 | orchestrator | + port = (known after apply) 2026-04-08 00:02:13.557387 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:13.557393 | orchestrator | } 2026-04-08 00:02:13.557400 | orchestrator | } 2026-04-08 00:02:13.557407 | orchestrator | 2026-04-08 00:02:13.557414 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-08 00:02:13.557421 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-08 00:02:13.557427 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-08 00:02:13.557434 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-08 00:02:13.557441 | orchestrator | + all_metadata = (known after apply) 2026-04-08 00:02:13.557447 | orchestrator | + all_tags = (known after apply) 2026-04-08 00:02:13.557454 | orchestrator | + availability_zone = "nova" 2026-04-08 00:02:13.557460 | orchestrator | + config_drive = true 2026-04-08 00:02:13.557467 | orchestrator | + created = (known after apply) 2026-04-08 00:02:13.557474 | orchestrator | + flavor_id = (known after apply) 2026-04-08 00:02:13.557480 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-08 00:02:13.557487 | 
orchestrator | + force_delete = false 2026-04-08 00:02:13.557493 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-08 00:02:13.557500 | orchestrator | + id = (known after apply) 2026-04-08 00:02:13.557507 | orchestrator | + image_id = (known after apply) 2026-04-08 00:02:13.557513 | orchestrator | + image_name = (known after apply) 2026-04-08 00:02:13.557520 | orchestrator | + key_pair = "testbed" 2026-04-08 00:02:13.557526 | orchestrator | + name = "testbed-node-4" 2026-04-08 00:02:13.557533 | orchestrator | + power_state = "active" 2026-04-08 00:02:13.557540 | orchestrator | + region = (known after apply) 2026-04-08 00:02:13.557546 | orchestrator | + security_groups = (known after apply) 2026-04-08 00:02:13.557553 | orchestrator | + stop_before_destroy = false 2026-04-08 00:02:13.557559 | orchestrator | + updated = (known after apply) 2026-04-08 00:02:13.557566 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-08 00:02:13.557573 | orchestrator | 2026-04-08 00:02:13.557580 | orchestrator | + block_device { 2026-04-08 00:02:13.557586 | orchestrator | + boot_index = 0 2026-04-08 00:02:13.557593 | orchestrator | + delete_on_termination = false 2026-04-08 00:02:13.557600 | orchestrator | + destination_type = "volume" 2026-04-08 00:02:13.557606 | orchestrator | + multiattach = false 2026-04-08 00:02:13.557613 | orchestrator | + source_type = "volume" 2026-04-08 00:02:13.557620 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:13.557626 | orchestrator | } 2026-04-08 00:02:13.557633 | orchestrator | 2026-04-08 00:02:13.557640 | orchestrator | + network { 2026-04-08 00:02:13.557646 | orchestrator | + access_network = false 2026-04-08 00:02:13.557653 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-08 00:02:13.557660 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-08 00:02:13.557666 | orchestrator | + mac = (known after apply) 2026-04-08 00:02:13.557673 | orchestrator | + name = (known 
after apply) 2026-04-08 00:02:13.557679 | orchestrator | + port = (known after apply) 2026-04-08 00:02:13.557686 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:13.557846 | orchestrator | } 2026-04-08 00:02:13.557859 | orchestrator | } 2026-04-08 00:02:13.557878 | orchestrator | 2026-04-08 00:02:13.557890 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-08 00:02:13.558228 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-08 00:02:13.558245 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-08 00:02:13.558286 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-08 00:02:13.558300 | orchestrator | + all_metadata = (known after apply) 2026-04-08 00:02:13.558312 | orchestrator | + all_tags = (known after apply) 2026-04-08 00:02:13.558324 | orchestrator | + availability_zone = "nova" 2026-04-08 00:02:13.558336 | orchestrator | + config_drive = true 2026-04-08 00:02:13.558348 | orchestrator | + created = (known after apply) 2026-04-08 00:02:13.558360 | orchestrator | + flavor_id = (known after apply) 2026-04-08 00:02:13.558372 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-08 00:02:13.558383 | orchestrator | + force_delete = false 2026-04-08 00:02:13.558395 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-08 00:02:13.558407 | orchestrator | + id = (known after apply) 2026-04-08 00:02:13.558418 | orchestrator | + image_id = (known after apply) 2026-04-08 00:02:13.558430 | orchestrator | + image_name = (known after apply) 2026-04-08 00:02:13.558442 | orchestrator | + key_pair = "testbed" 2026-04-08 00:02:13.558464 | orchestrator | + name = "testbed-node-5" 2026-04-08 00:02:13.558475 | orchestrator | + power_state = "active" 2026-04-08 00:02:13.558487 | orchestrator | + region = (known after apply) 2026-04-08 00:02:13.558498 | orchestrator | + security_groups = (known after apply) 2026-04-08 00:02:13.558510 | orchestrator | + 
stop_before_destroy = false 2026-04-08 00:02:13.558521 | orchestrator | + updated = (known after apply) 2026-04-08 00:02:13.558533 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-08 00:02:13.558545 | orchestrator | 2026-04-08 00:02:13.558557 | orchestrator | + block_device { 2026-04-08 00:02:13.558568 | orchestrator | + boot_index = 0 2026-04-08 00:02:13.558580 | orchestrator | + delete_on_termination = false 2026-04-08 00:02:13.558592 | orchestrator | + destination_type = "volume" 2026-04-08 00:02:13.558603 | orchestrator | + multiattach = false 2026-04-08 00:02:13.558615 | orchestrator | + source_type = "volume" 2026-04-08 00:02:13.558627 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:13.558639 | orchestrator | } 2026-04-08 00:02:13.558650 | orchestrator | 2026-04-08 00:02:13.558662 | orchestrator | + network { 2026-04-08 00:02:13.558673 | orchestrator | + access_network = false 2026-04-08 00:02:13.558686 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-08 00:02:13.558697 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-08 00:02:13.558709 | orchestrator | + mac = (known after apply) 2026-04-08 00:02:13.558721 | orchestrator | + name = (known after apply) 2026-04-08 00:02:13.558733 | orchestrator | + port = (known after apply) 2026-04-08 00:02:13.558745 | orchestrator | + uuid = (known after apply) 2026-04-08 00:02:13.558756 | orchestrator | } 2026-04-08 00:02:13.558767 | orchestrator | } 2026-04-08 00:02:13.558778 | orchestrator | 2026-04-08 00:02:13.558788 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-08 00:02:13.558799 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-08 00:02:13.558811 | orchestrator | + fingerprint = (known after apply) 2026-04-08 00:02:13.558821 | orchestrator | + id = (known after apply) 2026-04-08 00:02:13.558832 | orchestrator | + name = "testbed" 2026-04-08 00:02:13.558843 | orchestrator | + private_key = 
(sensitive value) 2026-04-08 00:02:13.558854 | orchestrator | + public_key = (known after apply) 2026-04-08 00:02:13.558864 | orchestrator | + region = (known after apply) 2026-04-08 00:02:13.558875 | orchestrator | + user_id = (known after apply) 2026-04-08 00:02:13.558886 | orchestrator | } 2026-04-08 00:02:13.558896 | orchestrator | 2026-04-08 00:02:13.558906 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-08 00:02:13.558916 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-08 00:02:13.558936 | orchestrator | + device = (known after apply) 2026-04-08 00:02:13.558946 | orchestrator | + id = (known after apply) 2026-04-08 00:02:13.558955 | orchestrator | + instance_id = (known after apply) 2026-04-08 00:02:13.558965 | orchestrator | + region = (known after apply) 2026-04-08 00:02:13.558983 | orchestrator | + volume_id = (known after apply) 2026-04-08 00:02:13.558994 | orchestrator | } 2026-04-08 00:02:13.559005 | orchestrator | 2026-04-08 00:02:13.559015 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-08 00:02:13.559027 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-08 00:02:13.559037 | orchestrator | + device = (known after apply) 2026-04-08 00:02:13.559068 | orchestrator | + id = (known after apply) 2026-04-08 00:02:13.559079 | orchestrator | + instance_id = (known after apply) 2026-04-08 00:02:13.559089 | orchestrator | + region = (known after apply) 2026-04-08 00:02:13.559100 | orchestrator | + volume_id = (known after apply) 2026-04-08 00:02:13.559110 | orchestrator | } 2026-04-08 00:02:13.559121 | orchestrator | 2026-04-08 00:02:13.559132 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-08 00:02:13.559142 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
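The planned ingress rules above come from declarations like the following sketch, reconstructed from the plan's attribute values; the `security_group_id` reference assumes the `security_group_management` resource that also appears in this plan, and the exact source in the testbed repository may differ.

```hcl
# Sketch of the Terraform source behind the planned "ssh" rule.
# Attribute values are taken from the plan output; the security_group_id
# reference is an assumption based on the testbed-management group below.
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```

The unset attributes shown as `(known after apply)` in the plan (`id`, `region`, `tenant_id`, and so on) are computed by the provider and are not written in the source.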
  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-04-08 00:02:13.563987 | orchestrator | + network_id = (known after apply) 2026-04-08 00:02:13.563993 | orchestrator | + no_gateway = false 2026-04-08 00:02:13.563998 | orchestrator | + region = (known after apply) 2026-04-08 00:02:13.564003 | orchestrator | + service_types = (known after apply) 2026-04-08 00:02:13.564012 | orchestrator | + tenant_id = (known after apply) 2026-04-08 00:02:13.564018 | orchestrator | 2026-04-08 00:02:13.564023 | orchestrator | + allocation_pool { 2026-04-08 00:02:13.564029 | orchestrator | + end = "192.168.31.250" 2026-04-08 00:02:13.564034 | orchestrator | + start = "192.168.31.200" 2026-04-08 00:02:13.564040 | orchestrator | } 2026-04-08 00:02:13.564059 | orchestrator | } 2026-04-08 00:02:13.564065 | orchestrator | 2026-04-08 00:02:13.564070 | orchestrator | # terraform_data.image will be created 2026-04-08 00:02:13.564075 | orchestrator | + resource "terraform_data" "image" { 2026-04-08 00:02:13.564081 | orchestrator | + id = (known after apply) 2026-04-08 00:02:13.564086 | orchestrator | + input = "Ubuntu 24.04" 2026-04-08 00:02:13.564092 | orchestrator | + output = (known after apply) 2026-04-08 00:02:13.564097 | orchestrator | } 2026-04-08 00:02:13.564102 | orchestrator | 2026-04-08 00:02:13.564108 | orchestrator | # terraform_data.image_node will be created 2026-04-08 00:02:13.564113 | orchestrator | + resource "terraform_data" "image_node" { 2026-04-08 00:02:13.564119 | orchestrator | + id = (known after apply) 2026-04-08 00:02:13.564124 | orchestrator | + input = "Ubuntu 24.04" 2026-04-08 00:02:13.564129 | orchestrator | + output = (known after apply) 2026-04-08 00:02:13.564135 | orchestrator | } 2026-04-08 00:02:13.564140 | orchestrator | 2026-04-08 00:02:13.564146 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-04-08 00:02:13.564151 | orchestrator | 2026-04-08 00:02:13.564156 | orchestrator | Changes to Outputs: 2026-04-08 00:02:13.564162 | orchestrator | + manager_address = (sensitive value) 2026-04-08 00:02:13.564167 | orchestrator | + private_key = (sensitive value) 2026-04-08 00:02:13.596720 | orchestrator | terraform_data.image_node: Creating... 2026-04-08 00:02:13.721352 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=6c6b02d2-59f6-fa1a-fcd8-77e672d6fed0] 2026-04-08 00:02:13.723394 | orchestrator | terraform_data.image: Creating... 2026-04-08 00:02:13.723638 | orchestrator | terraform_data.image: Creation complete after 0s [id=90899c23-82f9-d713-3ebd-8187dde46c76] 2026-04-08 00:02:13.744894 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-04-08 00:02:13.745384 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-04-08 00:02:13.754621 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-04-08 00:02:13.761163 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-04-08 00:02:13.762329 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-04-08 00:02:13.763173 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-04-08 00:02:13.763936 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-04-08 00:02:13.764641 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-04-08 00:02:13.764954 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-04-08 00:02:13.766272 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-04-08 00:02:14.246796 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2026-04-08 00:02:14.254038 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 
2026-04-08 00:02:14.255489 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-04-08 00:02:14.256933 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-04-08 00:02:14.258962 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-04-08 00:02:14.260300 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-04-08 00:02:14.870580 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=c56597f1-4c08-4fff-9149-ccee0a201f2f] 2026-04-08 00:02:14.888606 | orchestrator | local_file.id_rsa_pub: Creating... 2026-04-08 00:02:14.893275 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=34c384004325ecfaa29ab4ff23d5a1129b658e18] 2026-04-08 00:02:14.904038 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-04-08 00:02:14.908111 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=a029d128745db50633681c8615c7bf65f3eb445b] 2026-04-08 00:02:14.914751 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-04-08 00:02:17.372687 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=10ca2d35-3b66-46f3-ab0f-253d8a66f2e6] 2026-04-08 00:02:17.383677 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-04-08 00:02:17.406148 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=3c9bb5e0-782f-4e13-9d09-525e18a95d4a] 2026-04-08 00:02:17.413700 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 
2026-04-08 00:02:17.425453 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=8b42dfd4-7d2e-4d67-9c6f-8993b8aa5f54] 2026-04-08 00:02:17.430176 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=49047f2d-69c1-4fac-a475-f46440c51814] 2026-04-08 00:02:17.430556 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-04-08 00:02:17.434763 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-04-08 00:02:17.456897 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=7b1a6d0f-b1ea-4446-8ff8-479db77ebf36] 2026-04-08 00:02:17.457207 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=0a17ff12-522e-4235-8e4c-edb4898b90f5] 2026-04-08 00:02:17.467405 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 2026-04-08 00:02:17.476627 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-04-08 00:02:17.505845 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=9851de66-42ac-4afe-9f6b-65921d8ebe77] 2026-04-08 00:02:17.514675 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 
2026-04-08 00:02:17.524385 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=22a44c82-a679-4e37-857c-f96ffb845a8a] 2026-04-08 00:02:17.533298 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=29911cfa-2062-4b30-9263-aae8438640a0] 2026-04-08 00:02:18.253606 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=200542b7-40f0-42fe-9d28-23579a781b3e] 2026-04-08 00:02:18.635416 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=1d640ddb-aa37-4424-9528-28ea11c102b7] 2026-04-08 00:02:18.640292 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-04-08 00:02:20.780901 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=4596a618-b0c7-4f6c-b3f8-3bb0eece7c92] 2026-04-08 00:02:20.846559 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=d75891d9-cfe8-446f-818d-bc8c8304d51b] 2026-04-08 00:02:20.850521 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343] 2026-04-08 00:02:20.911949 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=7e6c101f-9165-4e11-b46f-dc6c65af7f32] 2026-04-08 00:02:20.999749 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=1f87f253-d467-48bc-bac0-692ec5abf0aa] 2026-04-08 00:02:21.063605 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=e2181995-561c-469f-942a-3ff6a519a6a0] 2026-04-08 00:02:21.740504 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=22879136-0fd9-4ec8-8480-b5ae82c2ded2] 2026-04-08 00:02:21.744189 | orchestrator | 
openstack_networking_secgroup_v2.security_group_node: Creating... 2026-04-08 00:02:21.745160 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-04-08 00:02:21.746726 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-04-08 00:02:21.969995 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=dfb7bb16-007b-421e-b744-4a4d18b53f91] 2026-04-08 00:02:21.978422 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-04-08 00:02:21.981982 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-04-08 00:02:21.982128 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-04-08 00:02:21.982160 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-04-08 00:02:21.982169 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-04-08 00:02:21.983766 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-04-08 00:02:22.000775 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=b6e0e9d5-b9c8-41b6-bb37-71534ee01ba1] 2026-04-08 00:02:22.007307 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-04-08 00:02:22.011813 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-04-08 00:02:22.013562 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 
2026-04-08 00:02:22.179397 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=1963b309-1367-4719-b68e-ad1014edc2ed] 2026-04-08 00:02:22.184917 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-04-08 00:02:22.201974 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=be0c0e5d-7b47-47fd-9790-0cddca0ae69a] 2026-04-08 00:02:22.212141 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-04-08 00:02:22.325429 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=63fe272a-3115-4141-86d8-ed4e2515897d] 2026-04-08 00:02:22.352576 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-04-08 00:02:22.360711 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=77360c6f-1279-4b5c-8546-2c973510d6b3] 2026-04-08 00:02:22.369133 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-04-08 00:02:22.530134 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=55d82307-3644-4ebb-9aba-4809a1d6e903] 2026-04-08 00:02:22.544194 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-04-08 00:02:22.585081 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=9ecfb5ce-3071-4233-a00e-319ab0050572] 2026-04-08 00:02:22.597728 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 
2026-04-08 00:02:22.641093 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=e9d4034d-0963-4f8d-b121-8b1e467e1ddb] 2026-04-08 00:02:22.648723 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-04-08 00:02:22.810432 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=9d257491-e985-47bd-abfd-a121c7359998] 2026-04-08 00:02:23.004642 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=81c9d017-b23e-459b-81a4-dfdef21fa65f] 2026-04-08 00:02:23.097664 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=5b1a87c9-4483-4232-8a8e-7bd4830d337e] 2026-04-08 00:02:23.250284 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=7f062bbf-642d-440f-8147-87ef15fffa4b] 2026-04-08 00:02:23.290881 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=f40c56aa-471a-4e30-9296-cf582196ee53] 2026-04-08 00:02:23.292952 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 0s [id=b1c5ba36-f26a-48c5-9e05-713d363ec01b] 2026-04-08 00:02:23.321671 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=7ca39984-3574-4e73-894a-1f7e25effff3] 2026-04-08 00:02:23.770906 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 0s [id=7372f077-57ae-4538-8cd2-d59aaad18a41] 2026-04-08 00:02:23.770987 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=75da94a1-313d-4eab-9ccb-09f9f7e13170] 2026-04-08 00:02:24.673302 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=18985e18-488e-4e96-b4a5-72daa357a93d] 
2026-04-08 00:02:24.697618 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-04-08 00:02:24.713210 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-04-08 00:02:24.716630 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-04-08 00:02:24.728038 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-04-08 00:02:24.730608 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-04-08 00:02:24.731374 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-04-08 00:02:24.740693 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-04-08 00:02:26.896513 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=9395b535-1ced-4171-bd06-c0807c5d7643] 2026-04-08 00:02:26.906832 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-04-08 00:02:26.912386 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-04-08 00:02:26.913951 | orchestrator | local_file.inventory: Creating... 2026-04-08 00:02:27.284140 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=047312e72d5739f0b65fe348dad69f37bdff1035] 2026-04-08 00:02:27.288962 | orchestrator | local_file.inventory: Creation complete after 0s [id=b73f068eeedbcbeede80171c1da1ce582f8ea790] 2026-04-08 00:02:27.687638 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=9395b535-1ced-4171-bd06-c0807c5d7643] 2026-04-08 00:02:34.720431 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-04-08 00:02:34.765549 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... 
[10s elapsed] 2026-04-08 00:02:34.765674 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-04-08 00:02:34.765700 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-04-08 00:02:34.765720 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-04-08 00:02:34.765741 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-04-08 00:02:44.729345 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-04-08 00:02:44.729469 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-04-08 00:02:44.739591 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-04-08 00:02:44.740798 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-04-08 00:02:44.740906 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-04-08 00:02:44.742009 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-04-08 00:02:45.309183 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=e050578f-774f-4a3d-ad8b-6726340e2786] 2026-04-08 00:02:45.487786 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=5c6ac434-37b7-45a7-b737-b2b16ed1ed1e] 2026-04-08 00:02:45.524653 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=a9e000ce-175b-41dd-9894-0697ce2ad916] 2026-04-08 00:02:46.231641 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=f6cf1212-8af3-4cab-b9d7-13e872dfb33f] 2026-04-08 00:02:54.737344 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... 
[30s elapsed] 2026-04-08 00:02:54.741520 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-04-08 00:02:56.059588 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=aa6514d4-8bad-48fa-ba0c-1232c6e11f1d] 2026-04-08 00:02:57.276726 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 32s [id=c832f7c0-b181-4eb3-adb3-0997a7bea04c] 2026-04-08 00:02:57.290578 | orchestrator | null_resource.node_semaphore: Creating... 2026-04-08 00:02:57.293440 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4056941985954061040] 2026-04-08 00:02:57.299764 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-04-08 00:02:57.301690 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-04-08 00:02:57.313012 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-04-08 00:02:57.318419 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-04-08 00:02:57.323559 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-04-08 00:02:57.324219 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-04-08 00:02:57.328557 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-04-08 00:02:57.335584 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-04-08 00:02:57.335925 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-04-08 00:02:57.339797 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 
2026-04-08 00:03:00.713118 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=5c6ac434-37b7-45a7-b737-b2b16ed1ed1e/10ca2d35-3b66-46f3-ab0f-253d8a66f2e6] 2026-04-08 00:03:00.728716 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=a9e000ce-175b-41dd-9894-0697ce2ad916/29911cfa-2062-4b30-9263-aae8438640a0] 2026-04-08 00:03:00.742180 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=5c6ac434-37b7-45a7-b737-b2b16ed1ed1e/3c9bb5e0-782f-4e13-9d09-525e18a95d4a] 2026-04-08 00:03:00.747037 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=e050578f-774f-4a3d-ad8b-6726340e2786/22a44c82-a679-4e37-857c-f96ffb845a8a] 2026-04-08 00:03:00.762607 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=a9e000ce-175b-41dd-9894-0697ce2ad916/7b1a6d0f-b1ea-4446-8ff8-479db77ebf36] 2026-04-08 00:03:00.766277 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=e050578f-774f-4a3d-ad8b-6726340e2786/0a17ff12-522e-4235-8e4c-edb4898b90f5] 2026-04-08 00:03:06.853244 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=5c6ac434-37b7-45a7-b737-b2b16ed1ed1e/9851de66-42ac-4afe-9f6b-65921d8ebe77] 2026-04-08 00:03:06.866617 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=a9e000ce-175b-41dd-9894-0697ce2ad916/8b42dfd4-7d2e-4d67-9c6f-8993b8aa5f54] 2026-04-08 00:03:06.888336 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=e050578f-774f-4a3d-ad8b-6726340e2786/49047f2d-69c1-4fac-a475-f46440c51814] 2026-04-08 00:03:07.340784 | orchestrator | 
openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-04-08 00:03:17.341677 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-04-08 00:03:17.804888 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=d73a8119-9780-4dfd-8f6a-b47e4db99923] 2026-04-08 00:03:17.833844 | orchestrator | 2026-04-08 00:03:17.833917 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-04-08 00:03:17.833924 | orchestrator | 2026-04-08 00:03:17.833929 | orchestrator | Outputs: 2026-04-08 00:03:17.833933 | orchestrator | 2026-04-08 00:03:17.833945 | orchestrator | manager_address = 2026-04-08 00:03:17.833952 | orchestrator | private_key = 2026-04-08 00:03:18.309825 | orchestrator | ok: Runtime: 0:01:09.622983 2026-04-08 00:03:18.332779 | 2026-04-08 00:03:18.332913 | TASK [Fetch manager address] 2026-04-08 00:03:18.798140 | orchestrator | ok 2026-04-08 00:03:18.806390 | 2026-04-08 00:03:18.806520 | TASK [Set manager_host address] 2026-04-08 00:03:18.880062 | orchestrator | ok 2026-04-08 00:03:18.887714 | 2026-04-08 00:03:18.887841 | LOOP [Update ansible collections] 2026-04-08 00:03:19.833337 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-08 00:03:19.834336 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-04-08 00:03:19.834424 | orchestrator | Starting galaxy collection install process 2026-04-08 00:03:19.834460 | orchestrator | Process install dependency map 2026-04-08 00:03:19.834502 | orchestrator | Starting collection install process 2026-04-08 00:03:19.834530 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2026-04-08 00:03:19.834565 | orchestrator | Created collection for osism.commons:999.0.0 at 
/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2026-04-08 00:03:19.834597 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-04-08 00:03:19.834667 | orchestrator | ok: Item: commons Runtime: 0:00:00.593433 2026-04-08 00:03:20.906808 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-08 00:03:20.907012 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-04-08 00:03:20.907045 | orchestrator | Starting galaxy collection install process 2026-04-08 00:03:20.907068 | orchestrator | Process install dependency map 2026-04-08 00:03:20.907090 | orchestrator | Starting collection install process 2026-04-08 00:03:20.907168 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2026-04-08 00:03:20.907193 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2026-04-08 00:03:20.907214 | orchestrator | osism.services:999.0.0 was installed successfully 2026-04-08 00:03:20.907291 | orchestrator | ok: Item: services Runtime: 0:00:00.756593 2026-04-08 00:03:20.934087 | 2026-04-08 00:03:20.934229 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-08 00:03:31.522067 | orchestrator | ok 2026-04-08 00:03:31.532856 | 2026-04-08 00:03:31.532998 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-08 00:04:31.577602 | orchestrator | ok 2026-04-08 00:04:31.588316 | 2026-04-08 00:04:31.588573 | TASK [Fetch manager ssh hostkey] 2026-04-08 00:04:33.173486 | orchestrator | Output suppressed because no_log was given 2026-04-08 00:04:33.188928 | 2026-04-08 00:04:33.190285 | TASK [Get ssh keypair from terraform environment] 2026-04-08 00:04:33.740542 | orchestrator | ok: Runtime: 0:00:00.009277 2026-04-08 00:04:33.750132 | 
2026-04-08 00:04:33.750267 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-08 00:04:33.791794 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-08 00:04:33.799691 | 2026-04-08 00:04:33.799803 | TASK [Run manager part 0] 2026-04-08 00:04:34.828422 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-08 00:04:34.875260 | orchestrator | 2026-04-08 00:04:34.875294 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-04-08 00:04:34.875300 | orchestrator | 2026-04-08 00:04:34.875313 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-04-08 00:04:36.925924 | orchestrator | ok: [testbed-manager] 2026-04-08 00:04:36.925963 | orchestrator | 2026-04-08 00:04:36.925984 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-08 00:04:36.925993 | orchestrator | 2026-04-08 00:04:36.926001 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-08 00:04:38.647398 | orchestrator | ok: [testbed-manager] 2026-04-08 00:04:38.647437 | orchestrator | 2026-04-08 00:04:38.647447 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-08 00:04:39.261056 | orchestrator | ok: [testbed-manager] 2026-04-08 00:04:39.261141 | orchestrator | 2026-04-08 00:04:39.261151 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-08 00:04:39.302284 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:04:39.302315 | orchestrator | 2026-04-08 00:04:39.302324 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-04-08 
00:04:39.330264 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:04:39.330290 | orchestrator |
2026-04-08 00:04:39.330297 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-04-08 00:04:39.358687 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:04:39.358729 | orchestrator |
2026-04-08 00:04:39.358735 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-04-08 00:04:40.077320 | orchestrator | changed: [testbed-manager]
2026-04-08 00:04:40.077365 | orchestrator |
2026-04-08 00:04:40.077373 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-04-08 00:07:29.675654 | orchestrator | changed: [testbed-manager]
2026-04-08 00:07:29.675728 | orchestrator |
2026-04-08 00:07:29.675748 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-08 00:09:01.018190 | orchestrator | changed: [testbed-manager]
2026-04-08 00:09:01.018294 | orchestrator |
2026-04-08 00:09:01.018316 | orchestrator | TASK [Install required packages] ***********************************************
2026-04-08 00:09:21.297365 | orchestrator | changed: [testbed-manager]
2026-04-08 00:09:21.297447 | orchestrator |
2026-04-08 00:09:21.297459 | orchestrator | TASK [Remove some python packages] *********************************************
2026-04-08 00:09:29.804915 | orchestrator | changed: [testbed-manager]
2026-04-08 00:09:29.805014 | orchestrator |
2026-04-08 00:09:29.805033 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-04-08 00:09:29.854466 | orchestrator | ok: [testbed-manager]
2026-04-08 00:09:29.854549 | orchestrator |
2026-04-08 00:09:29.854568 | orchestrator | TASK [Get current user] ********************************************************
2026-04-08 00:09:30.649085 | orchestrator | ok: [testbed-manager]
2026-04-08 00:09:30.649154 | orchestrator |
2026-04-08 00:09:30.649165 | orchestrator | TASK [Create venv directory] ***************************************************
2026-04-08 00:09:31.409000 | orchestrator | changed: [testbed-manager]
2026-04-08 00:09:31.409105 | orchestrator |
2026-04-08 00:09:31.409127 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-04-08 00:09:37.299149 | orchestrator | changed: [testbed-manager]
2026-04-08 00:09:37.299194 | orchestrator |
2026-04-08 00:09:37.299203 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-04-08 00:09:42.852943 | orchestrator | changed: [testbed-manager]
2026-04-08 00:09:42.853069 | orchestrator |
2026-04-08 00:09:42.853089 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-04-08 00:09:45.567231 | orchestrator | changed: [testbed-manager]
2026-04-08 00:09:45.567458 | orchestrator |
2026-04-08 00:09:45.567479 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2026-04-08 00:09:47.261564 | orchestrator | changed: [testbed-manager]
2026-04-08 00:09:47.261714 | orchestrator |
2026-04-08 00:09:47.261722 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2026-04-08 00:09:48.329097 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2026-04-08 00:09:48.329358 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2026-04-08 00:09:48.329378 | orchestrator |
2026-04-08 00:09:48.329392 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2026-04-08 00:09:48.370557 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2026-04-08 00:09:48.370632 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2026-04-08 00:09:48.370651 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2026-04-08 00:09:48.370667 | orchestrator | deprecation_warnings=False in ansible.cfg.
2026-04-08 00:09:51.522891 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2026-04-08 00:09:51.522963 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2026-04-08 00:09:51.522973 | orchestrator |
2026-04-08 00:09:51.522981 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2026-04-08 00:09:52.094177 | orchestrator | changed: [testbed-manager]
2026-04-08 00:09:52.094281 | orchestrator |
2026-04-08 00:09:52.094304 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2026-04-08 00:10:13.658188 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2026-04-08 00:10:13.658356 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2026-04-08 00:10:13.658376 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2026-04-08 00:10:13.658389 | orchestrator |
2026-04-08 00:10:13.658401 | orchestrator | TASK [Install local collections] ***********************************************
2026-04-08 00:10:15.923758 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2026-04-08 00:10:15.923861 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2026-04-08 00:10:15.923879 | orchestrator |
2026-04-08 00:10:15.923894 | orchestrator | PLAY [Create operator user] ****************************************************
2026-04-08 00:10:15.923907 | orchestrator |
2026-04-08 00:10:15.923918 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-08 00:10:17.307100 | orchestrator | ok: [testbed-manager]
2026-04-08 00:10:17.307185 | orchestrator |
2026-04-08 00:10:17.307201 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-04-08 00:10:17.358316 | orchestrator | ok: [testbed-manager]
2026-04-08 00:10:17.358409 | orchestrator |
2026-04-08 00:10:17.358427 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-04-08 00:10:17.453218 | orchestrator | ok: [testbed-manager]
2026-04-08 00:10:17.453271 | orchestrator |
2026-04-08 00:10:17.453278 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-04-08 00:10:18.202126 | orchestrator | changed: [testbed-manager]
2026-04-08 00:10:18.202184 | orchestrator |
2026-04-08 00:10:18.202194 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-04-08 00:10:18.915188 | orchestrator | changed: [testbed-manager]
2026-04-08 00:10:18.916141 | orchestrator |
2026-04-08 00:10:18.916171 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-04-08 00:10:20.265418 | orchestrator | changed: [testbed-manager] => (item=adm)
2026-04-08 00:10:20.265479 | orchestrator | changed: [testbed-manager] => (item=sudo)
2026-04-08 00:10:20.265493 | orchestrator |
2026-04-08 00:10:20.265506 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-04-08 00:10:21.640794 | orchestrator | changed: [testbed-manager]
2026-04-08 00:10:21.640840 | orchestrator |
2026-04-08 00:10:21.640849 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-04-08 00:10:23.369794 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8)
2026-04-08 00:10:23.370365 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8)
2026-04-08 00:10:23.370417 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8)
2026-04-08 00:10:23.370436 | orchestrator |
2026-04-08 00:10:23.370459 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-04-08 00:10:23.430856 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:10:23.431092 | orchestrator |
2026-04-08 00:10:23.431117 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-04-08 00:10:23.495095 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:10:23.495182 | orchestrator |
2026-04-08 00:10:23.495197 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-04-08 00:10:24.050282 | orchestrator | changed: [testbed-manager]
2026-04-08 00:10:24.050368 | orchestrator |
2026-04-08 00:10:24.050385 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-04-08 00:10:24.124869 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:10:24.124922 | orchestrator |
2026-04-08 00:10:24.124928 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-04-08 00:10:24.984345 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-08 00:10:24.984444 | orchestrator | changed: [testbed-manager]
2026-04-08 00:10:24.984462 | orchestrator |
2026-04-08 00:10:24.984474 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-04-08 00:10:25.023149 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:10:25.023201 | orchestrator |
2026-04-08 00:10:25.023208 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-04-08 00:10:25.059570 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:10:25.059603 | orchestrator |
2026-04-08 00:10:25.059610 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-04-08 00:10:25.091534 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:10:25.091570 | orchestrator |
2026-04-08 00:10:25.091577 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-04-08 00:10:25.169498 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:10:25.169538 | orchestrator |
2026-04-08 00:10:25.169546 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-04-08 00:10:25.853405 | orchestrator | ok: [testbed-manager]
2026-04-08 00:10:25.853445 | orchestrator |
2026-04-08 00:10:25.853454 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-04-08 00:10:25.853462 | orchestrator |
2026-04-08 00:10:25.853471 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-08 00:10:27.134352 | orchestrator | ok: [testbed-manager]
2026-04-08 00:10:27.134398 | orchestrator |
2026-04-08 00:10:27.134408 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2026-04-08 00:10:28.104792 | orchestrator | changed: [testbed-manager]
2026-04-08 00:10:28.104904 | orchestrator |
2026-04-08 00:10:28.104922 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:10:28.104937 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0
2026-04-08 00:10:28.104950 | orchestrator |
2026-04-08 00:10:28.533022 | orchestrator | ok: Runtime: 0:05:54.074216
2026-04-08 00:10:28.550278 |
2026-04-08 00:10:28.550406 | TASK [Point out that the log in on the manager is now possible]
2026-04-08 00:10:28.599706 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'.
2026-04-08 00:10:28.610887 |
2026-04-08 00:10:28.611043 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-08 00:10:28.653570 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-04-08 00:10:28.663559 |
2026-04-08 00:10:28.663692 | TASK [Run manager part 1 + 2]
2026-04-08 00:10:29.502714 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-08 00:10:29.560020 | orchestrator |
2026-04-08 00:10:29.560094 | orchestrator | PLAY [Run manager part 1] ******************************************************
2026-04-08 00:10:29.560100 | orchestrator |
2026-04-08 00:10:29.560113 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-08 00:10:32.003744 | orchestrator | ok: [testbed-manager]
2026-04-08 00:10:32.003802 | orchestrator |
2026-04-08 00:10:32.003829 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-04-08 00:10:32.038572 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:10:32.038636 | orchestrator |
2026-04-08 00:10:32.038648 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-04-08 00:10:32.078601 | orchestrator | ok: [testbed-manager]
2026-04-08 00:10:32.078650 | orchestrator |
2026-04-08 00:10:32.078664 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-08 00:10:32.115543 | orchestrator | ok: [testbed-manager]
2026-04-08 00:10:32.115620 | orchestrator |
2026-04-08 00:10:32.115637 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-08 00:10:32.184241 | orchestrator | ok: [testbed-manager]
2026-04-08 00:10:32.184318 | orchestrator |
2026-04-08 00:10:32.184331 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-08 00:10:32.274560 | orchestrator | ok: [testbed-manager]
2026-04-08 00:10:32.274624 | orchestrator |
2026-04-08 00:10:32.274635 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-08 00:10:32.320057 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2026-04-08 00:10:32.320108 | orchestrator |
2026-04-08 00:10:32.320115 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-08 00:10:32.955391 | orchestrator | ok: [testbed-manager]
2026-04-08 00:10:32.955432 | orchestrator |
2026-04-08 00:10:32.955439 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-08 00:10:32.995508 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:10:32.995582 | orchestrator |
2026-04-08 00:10:32.995597 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-08 00:10:34.173355 | orchestrator | changed: [testbed-manager]
2026-04-08 00:10:34.173416 | orchestrator |
2026-04-08 00:10:34.173426 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-08 00:10:34.666986 | orchestrator | ok: [testbed-manager]
2026-04-08 00:10:34.667059 | orchestrator |
2026-04-08 00:10:34.667068 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-08 00:10:35.698143 | orchestrator | changed: [testbed-manager]
2026-04-08 00:10:35.698486 | orchestrator |
2026-04-08 00:10:35.698518 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-08 00:10:50.240998 | orchestrator | changed: [testbed-manager]
2026-04-08 00:10:50.241085 | orchestrator |
2026-04-08 00:10:50.241095 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-04-08 00:10:50.899498 | orchestrator | ok: [testbed-manager]
2026-04-08 00:10:50.899582 | orchestrator |
2026-04-08 00:10:50.899597 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-04-08 00:10:50.959832 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:10:50.959893 | orchestrator |
2026-04-08 00:10:50.959899 | orchestrator | TASK [Copy SSH public key] *****************************************************
2026-04-08 00:10:51.924156 | orchestrator | changed: [testbed-manager]
2026-04-08 00:10:51.924225 | orchestrator |
2026-04-08 00:10:51.924242 | orchestrator | TASK [Copy SSH private key] ****************************************************
2026-04-08 00:10:52.884086 | orchestrator | changed: [testbed-manager]
2026-04-08 00:10:52.884176 | orchestrator |
2026-04-08 00:10:52.884191 | orchestrator | TASK [Create configuration directory] ******************************************
2026-04-08 00:10:53.445108 | orchestrator | changed: [testbed-manager]
2026-04-08 00:10:53.445203 | orchestrator |
2026-04-08 00:10:53.445219 | orchestrator | TASK [Copy testbed repo] *******************************************************
2026-04-08 00:10:53.487851 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2026-04-08 00:10:53.487946 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2026-04-08 00:10:53.487957 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2026-04-08 00:10:53.487966 | orchestrator | deprecation_warnings=False in ansible.cfg.
2026-04-08 00:10:55.978155 | orchestrator | changed: [testbed-manager]
2026-04-08 00:10:55.978254 | orchestrator |
2026-04-08 00:10:55.978272 | orchestrator | TASK [Install python requirements in venv] *************************************
2026-04-08 00:11:05.744380 | orchestrator | ok: [testbed-manager] => (item=Jinja2)
2026-04-08 00:11:05.744442 | orchestrator | ok: [testbed-manager] => (item=PyYAML)
2026-04-08 00:11:05.744451 | orchestrator | ok: [testbed-manager] => (item=packaging)
2026-04-08 00:11:05.744458 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3)
2026-04-08 00:11:05.744467 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2)
2026-04-08 00:11:05.744474 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0)
2026-04-08 00:11:05.744480 | orchestrator |
2026-04-08 00:11:05.744486 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] *********************
2026-04-08 00:11:06.766139 | orchestrator | changed: [testbed-manager]
2026-04-08 00:11:06.766226 | orchestrator |
2026-04-08 00:11:06.766243 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] *****************************
2026-04-08 00:11:09.711380 | orchestrator | changed: [testbed-manager]
2026-04-08 00:11:09.712653 | orchestrator |
2026-04-08 00:11:09.712720 | orchestrator | TASK [Run update-ca-trust on RedHat] *******************************************
2026-04-08 00:11:09.745104 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:11:09.745195 | orchestrator |
2026-04-08 00:11:09.745212 | orchestrator | TASK [Run manager part 2] ******************************************************
2026-04-08 00:12:44.583776 | orchestrator | changed: [testbed-manager]
2026-04-08 00:12:44.583879 | orchestrator |
2026-04-08 00:12:44.583897 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-08 00:12:45.700662 | orchestrator | ok: [testbed-manager]
2026-04-08 00:12:45.700773 | orchestrator |
2026-04-08 00:12:45.700792 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:12:45.700806 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0
2026-04-08 00:12:45.700819 | orchestrator |
2026-04-08 00:12:46.289098 | orchestrator | ok: Runtime: 0:02:16.841528
2026-04-08 00:12:46.308062 |
2026-04-08 00:12:46.308217 | TASK [Reboot manager]
2026-04-08 00:12:47.846750 | orchestrator | ok: Runtime: 0:00:00.982095
2026-04-08 00:12:47.862434 |
2026-04-08 00:12:47.862584 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-08 00:13:02.863152 | orchestrator | ok
2026-04-08 00:13:02.876112 |
2026-04-08 00:13:02.876283 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-08 00:14:02.937963 | orchestrator | ok
2026-04-08 00:14:02.947659 |
2026-04-08 00:14:02.947787 | TASK [Deploy manager + bootstrap nodes]
2026-04-08 00:14:05.602461 | orchestrator |
2026-04-08 00:14:05.602676 | orchestrator | # DEPLOY MANAGER
2026-04-08 00:14:05.602701 | orchestrator |
2026-04-08 00:14:05.602715 | orchestrator | + set -e
2026-04-08 00:14:05.602728 | orchestrator | + echo
2026-04-08 00:14:05.602742 | orchestrator | + echo '# DEPLOY MANAGER'
2026-04-08 00:14:05.602759 | orchestrator | + echo
2026-04-08 00:14:05.602804 | orchestrator | + cat /opt/manager-vars.sh
2026-04-08 00:14:05.605746 | orchestrator | export NUMBER_OF_NODES=6
2026-04-08 00:14:05.605778 | orchestrator |
2026-04-08 00:14:05.605815 | orchestrator | export CEPH_VERSION=
2026-04-08 00:14:05.605829 | orchestrator | export CONFIGURATION_VERSION=main
2026-04-08 00:14:05.605841 | orchestrator | export MANAGER_VERSION=10.0.0
2026-04-08 00:14:05.605854 | orchestrator | export OPENSTACK_VERSION=
2026-04-08 00:14:05.605865 | orchestrator |
2026-04-08 00:14:05.605876 | orchestrator | export ARA=false
2026-04-08 00:14:05.605893 | orchestrator | export DEPLOY_MODE=manager
2026-04-08 00:14:05.605905 | orchestrator | export TEMPEST=true
2026-04-08 00:14:05.605916 | orchestrator | export IS_ZUUL=true
2026-04-08 00:14:05.605934 | orchestrator |
2026-04-08 00:14:05.605953 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.114
2026-04-08 00:14:05.605965 | orchestrator | export EXTERNAL_API=false
2026-04-08 00:14:05.605976 | orchestrator |
2026-04-08 00:14:05.605993 | orchestrator | export IMAGE_USER=ubuntu
2026-04-08 00:14:05.606004 | orchestrator | export IMAGE_NODE_USER=ubuntu
2026-04-08 00:14:05.606060 | orchestrator |
2026-04-08 00:14:05.606079 | orchestrator | export CEPH_STACK=ceph-ansible
2026-04-08 00:14:05.606097 | orchestrator |
2026-04-08 00:14:05.606109 | orchestrator | + echo
2026-04-08 00:14:05.606120 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-08 00:14:05.606606 | orchestrator | ++ export INTERACTIVE=false
2026-04-08 00:14:05.606627 | orchestrator | ++ INTERACTIVE=false
2026-04-08 00:14:05.606638 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-08 00:14:05.606650 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-08 00:14:05.606905 | orchestrator | + source /opt/manager-vars.sh
2026-04-08 00:14:05.606920 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-08 00:14:05.606931 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-08 00:14:05.606942 | orchestrator | ++ export CEPH_VERSION=
2026-04-08 00:14:05.606953 | orchestrator | ++ CEPH_VERSION=
2026-04-08 00:14:05.606969 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-08 00:14:05.606980 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-08 00:14:05.606991 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-08 00:14:05.607002 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-08 00:14:05.607013 | orchestrator | ++ export OPENSTACK_VERSION=
2026-04-08 00:14:05.607024 | orchestrator | ++ OPENSTACK_VERSION=
2026-04-08 00:14:05.607035 | orchestrator | ++ export ARA=false
2026-04-08 00:14:05.607046 | orchestrator | ++ ARA=false
2026-04-08 00:14:05.607057 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-08 00:14:05.607078 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-08 00:14:05.607090 | orchestrator | ++ export TEMPEST=true
2026-04-08 00:14:05.607101 | orchestrator | ++ TEMPEST=true
2026-04-08 00:14:05.607111 | orchestrator | ++ export IS_ZUUL=true
2026-04-08 00:14:05.607122 | orchestrator | ++ IS_ZUUL=true
2026-04-08 00:14:05.607133 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.114
2026-04-08 00:14:05.607145 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.114
2026-04-08 00:14:05.607156 | orchestrator | ++ export EXTERNAL_API=false
2026-04-08 00:14:05.607170 | orchestrator | ++ EXTERNAL_API=false
2026-04-08 00:14:05.607181 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-08 00:14:05.607192 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-08 00:14:05.607203 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-08 00:14:05.607214 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-08 00:14:05.607225 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-08 00:14:05.607236 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-08 00:14:05.607248 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2026-04-08 00:14:05.655164 | orchestrator | + docker version
2026-04-08 00:14:05.747775 | orchestrator | Client: Docker Engine - Community
2026-04-08 00:14:05.747867 | orchestrator | Version: 27.5.1
2026-04-08 00:14:05.747881 | orchestrator | API version: 1.47
2026-04-08 00:14:05.747893 | orchestrator | Go version: go1.22.11
2026-04-08 00:14:05.747904 | orchestrator | Git commit: 9f9e405
2026-04-08 00:14:05.747915 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2026-04-08 00:14:05.747928 | orchestrator | OS/Arch: linux/amd64
2026-04-08 00:14:05.747939 | orchestrator | Context: default
2026-04-08 00:14:05.747950 | orchestrator |
2026-04-08 00:14:05.747962 | orchestrator | Server: Docker Engine - Community
2026-04-08 00:14:05.747974 | orchestrator | Engine:
2026-04-08 00:14:05.747996 | orchestrator | Version: 27.5.1
2026-04-08 00:14:05.748008 | orchestrator | API version: 1.47 (minimum version 1.24)
2026-04-08 00:14:05.748049 | orchestrator | Go version: go1.22.11
2026-04-08 00:14:05.748061 | orchestrator | Git commit: 4c9b3b0
2026-04-08 00:14:05.748072 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2026-04-08 00:14:05.748083 | orchestrator | OS/Arch: linux/amd64
2026-04-08 00:14:05.748094 | orchestrator | Experimental: false
2026-04-08 00:14:05.748105 | orchestrator | containerd:
2026-04-08 00:14:05.748120 | orchestrator | Version: v2.2.2
2026-04-08 00:14:05.748131 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9
2026-04-08 00:14:05.748143 | orchestrator | runc:
2026-04-08 00:14:05.748154 | orchestrator | Version: 1.3.4
2026-04-08 00:14:05.748165 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8
2026-04-08 00:14:05.748176 | orchestrator | docker-init:
2026-04-08 00:14:05.748187 | orchestrator | Version: 0.19.0
2026-04-08 00:14:05.748199 | orchestrator | GitCommit: de40ad0
2026-04-08 00:14:05.751193 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2026-04-08 00:14:05.763610 | orchestrator | + set -e
2026-04-08 00:14:05.763685 | orchestrator | + source /opt/manager-vars.sh
2026-04-08 00:14:05.763699 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-08 00:14:05.763710 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-08 00:14:05.763721 | orchestrator | ++ export CEPH_VERSION=
2026-04-08 00:14:05.763732 | orchestrator | ++ CEPH_VERSION=
2026-04-08 00:14:05.763744 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-08 00:14:05.763756 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-08 00:14:05.763767 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-08 00:14:05.763779 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-08 00:14:05.763790 | orchestrator | ++ export OPENSTACK_VERSION=
2026-04-08 00:14:05.763801 | orchestrator | ++ OPENSTACK_VERSION=
2026-04-08 00:14:05.763812 | orchestrator | ++ export ARA=false
2026-04-08 00:14:05.763823 | orchestrator | ++ ARA=false
2026-04-08 00:14:05.763834 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-08 00:14:05.763845 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-08 00:14:05.763855 | orchestrator | ++ export TEMPEST=true
2026-04-08 00:14:05.763866 | orchestrator | ++ TEMPEST=true
2026-04-08 00:14:05.763876 | orchestrator | ++ export IS_ZUUL=true
2026-04-08 00:14:05.763892 | orchestrator | ++ IS_ZUUL=true
2026-04-08 00:14:05.763910 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.114
2026-04-08 00:14:05.763929 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.114
2026-04-08 00:14:05.763959 | orchestrator | ++ export EXTERNAL_API=false
2026-04-08 00:14:05.763971 | orchestrator | ++ EXTERNAL_API=false
2026-04-08 00:14:05.763982 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-08 00:14:05.763993 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-08 00:14:05.764004 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-08 00:14:05.764015 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-08 00:14:05.764026 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-08 00:14:05.764037 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-08 00:14:05.764047 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-08 00:14:05.764058 | orchestrator | ++ export INTERACTIVE=false
2026-04-08 00:14:05.764069 | orchestrator | ++ INTERACTIVE=false
2026-04-08 00:14:05.764080 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-08 00:14:05.764094 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-08 00:14:05.764106 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-04-08 00:14:05.764125 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0
2026-04-08 00:14:05.770685 | orchestrator | + set -e
2026-04-08 00:14:05.771248 | orchestrator | + VERSION=10.0.0
2026-04-08 00:14:05.771288 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0/g' /opt/configuration/environments/manager/configuration.yml
2026-04-08 00:14:05.780408 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-04-08 00:14:05.780465 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-08 00:14:05.784470 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-08 00:14:05.788217 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-04-08 00:14:05.794978 | orchestrator | /opt/configuration ~
2026-04-08 00:14:05.795027 | orchestrator | + set -e
2026-04-08 00:14:05.795039 | orchestrator | + pushd /opt/configuration
2026-04-08 00:14:05.795051 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-08 00:14:05.796456 | orchestrator | + source /opt/venv/bin/activate
2026-04-08 00:14:05.797613 | orchestrator | ++ deactivate nondestructive
2026-04-08 00:14:05.797648 | orchestrator | ++ '[' -n '' ']'
2026-04-08 00:14:05.797660 | orchestrator | ++ '[' -n '' ']'
2026-04-08 00:14:05.797672 | orchestrator | ++ hash -r
2026-04-08 00:14:05.797920 | orchestrator | ++ '[' -n '' ']'
2026-04-08 00:14:05.797938 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-08 00:14:05.797949 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-08 00:14:05.797960 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-08 00:14:05.798008 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-08 00:14:05.798071 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-08 00:14:05.798242 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-08 00:14:05.798258 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-08 00:14:05.798271 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-08 00:14:05.798283 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-08 00:14:05.798294 | orchestrator | ++ export PATH
2026-04-08 00:14:05.798305 | orchestrator | ++ '[' -n '' ']'
2026-04-08 00:14:05.798316 | orchestrator | ++ '[' -z '' ']'
2026-04-08 00:14:05.798327 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-08 00:14:05.798338 | orchestrator | ++ PS1='(venv) '
2026-04-08 00:14:05.798348 | orchestrator | ++ export PS1
2026-04-08 00:14:05.798364 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-08 00:14:05.798375 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-08 00:14:05.798386 | orchestrator | ++ hash -r
2026-04-08 00:14:05.798522 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-08 00:14:06.794117 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-08 00:14:06.794694 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-08 00:14:06.796066 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-08 00:14:06.797172 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-08 00:14:06.798310 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-04-08 00:14:06.808064 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-08 00:14:06.809232 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-08 00:14:06.810316 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-08 00:14:06.811509 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-08 00:14:06.839826 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-08 00:14:06.841171 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-08 00:14:06.842745 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-08 00:14:06.844044 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-08 00:14:06.847759 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-08 00:14:07.041471 | orchestrator | ++ which gilt
2026-04-08 00:14:07.045247 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-08 00:14:07.045315 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-08 00:14:07.257163 | orchestrator | osism.cfg-generics:
2026-04-08 00:14:07.434251 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-04-08 00:14:07.434348 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-08 00:14:07.434527 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-08 00:14:07.434591 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-08 00:14:08.167919 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-08 00:14:08.177721 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-08 00:14:08.510355 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-08 00:14:08.663234 | orchestrator | ~
2026-04-08 00:14:08.663305 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-08 00:14:08.663321 | orchestrator | + deactivate
2026-04-08 00:14:08.663336 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-08 00:14:08.663349 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-08 00:14:08.663360 | orchestrator | + export PATH
2026-04-08 00:14:08.663372 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-08 00:14:08.663383 | orchestrator | + '[' -n '' ']'
2026-04-08 00:14:08.663394 | orchestrator | + hash -r
2026-04-08 00:14:08.663405 | orchestrator | + '[' -n '' ']'
2026-04-08 00:14:08.663416 | orchestrator | + unset VIRTUAL_ENV
2026-04-08 00:14:08.663426 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-08 00:14:08.663438 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-08 00:14:08.663448 | orchestrator | + unset -f deactivate
2026-04-08 00:14:08.663459 | orchestrator | + popd
2026-04-08 00:14:08.663470 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-08 00:14:08.663481 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-04-08 00:14:08.663492 | orchestrator | ++ semver 10.0.0 7.0.0
2026-04-08 00:14:08.663503 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-08 00:14:08.663514 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-04-08 00:14:08.663526 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-08 00:14:08.675296 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-08 00:14:08.675385 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-08 00:14:08.682064 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-08 00:14:08.686566 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-04-08 00:14:08.774903 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-08 00:14:08.775002 | orchestrator | + source /opt/venv/bin/activate
2026-04-08 00:14:08.775026 | orchestrator | ++ deactivate nondestructive
2026-04-08 00:14:08.775047 | orchestrator | ++ '[' -n '' ']'
2026-04-08 00:14:08.775067 | orchestrator | ++ '[' -n '' ']'
2026-04-08 00:14:08.775087 | orchestrator | ++ hash -r
2026-04-08 00:14:08.775106 | orchestrator | ++ '[' -n '' ']'
2026-04-08 00:14:08.775130 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-08 00:14:08.775149 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-08 00:14:08.775168 | orchestrator | ++ '[' '!'
nondestructive = nondestructive ']' 2026-04-08 00:14:08.775190 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-08 00:14:08.775210 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-08 00:14:08.775245 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-08 00:14:08.775266 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-08 00:14:08.775286 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-08 00:14:08.775307 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-08 00:14:08.775326 | orchestrator | ++ export PATH 2026-04-08 00:14:08.775342 | orchestrator | ++ '[' -n '' ']' 2026-04-08 00:14:08.775353 | orchestrator | ++ '[' -z '' ']' 2026-04-08 00:14:08.775364 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-08 00:14:08.775375 | orchestrator | ++ PS1='(venv) ' 2026-04-08 00:14:08.775386 | orchestrator | ++ export PS1 2026-04-08 00:14:08.775397 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-08 00:14:08.775407 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-08 00:14:08.775419 | orchestrator | ++ hash -r 2026-04-08 00:14:08.775435 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-04-08 00:14:09.791249 | orchestrator | 2026-04-08 00:14:09.791362 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-04-08 00:14:09.791386 | orchestrator | 2026-04-08 00:14:09.791403 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-08 00:14:10.320726 | orchestrator | ok: [testbed-manager] 2026-04-08 00:14:10.320853 | orchestrator | 2026-04-08 00:14:10.320870 | orchestrator | TASK [Copy fact files] ********************************************************* 
2026-04-08 00:14:11.220262 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:11.220355 | orchestrator | 2026-04-08 00:14:11.220369 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-04-08 00:14:11.220380 | orchestrator | 2026-04-08 00:14:11.220390 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-08 00:14:13.383393 | orchestrator | ok: [testbed-manager] 2026-04-08 00:14:13.383480 | orchestrator | 2026-04-08 00:14:13.383493 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-04-08 00:14:13.432383 | orchestrator | ok: [testbed-manager] 2026-04-08 00:14:13.432481 | orchestrator | 2026-04-08 00:14:13.432499 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-04-08 00:14:13.877517 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:13.877699 | orchestrator | 2026-04-08 00:14:13.877726 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-04-08 00:14:13.916994 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:14:13.917081 | orchestrator | 2026-04-08 00:14:13.917095 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-08 00:14:14.258590 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:14.258697 | orchestrator | 2026-04-08 00:14:14.258715 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-04-08 00:14:14.589178 | orchestrator | ok: [testbed-manager] 2026-04-08 00:14:14.589276 | orchestrator | 2026-04-08 00:14:14.589293 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-04-08 00:14:14.691986 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:14:14.692074 | orchestrator | 2026-04-08 00:14:14.692088 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-04-08 00:14:14.692101 | orchestrator | 2026-04-08 00:14:14.692113 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-08 00:14:16.353797 | orchestrator | ok: [testbed-manager] 2026-04-08 00:14:16.353897 | orchestrator | 2026-04-08 00:14:16.353913 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-04-08 00:14:16.447647 | orchestrator | included: osism.services.traefik for testbed-manager 2026-04-08 00:14:16.447747 | orchestrator | 2026-04-08 00:14:16.447766 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-04-08 00:14:16.497866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-04-08 00:14:16.497954 | orchestrator | 2026-04-08 00:14:16.497969 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-04-08 00:14:17.529491 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-04-08 00:14:17.529610 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-04-08 00:14:17.529626 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-04-08 00:14:17.529638 | orchestrator | 2026-04-08 00:14:17.529649 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-04-08 00:14:19.256872 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-04-08 00:14:19.256964 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-04-08 00:14:19.256980 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-04-08 00:14:19.256993 | orchestrator | 2026-04-08 00:14:19.257006 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-04-08 00:14:19.890240 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-08 00:14:19.890335 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:19.890350 | orchestrator | 2026-04-08 00:14:19.890363 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-04-08 00:14:20.505778 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-08 00:14:20.505890 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:20.505915 | orchestrator | 2026-04-08 00:14:20.505928 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-04-08 00:14:20.559884 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:14:20.559963 | orchestrator | 2026-04-08 00:14:20.559975 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-04-08 00:14:20.921260 | orchestrator | ok: [testbed-manager] 2026-04-08 00:14:20.921388 | orchestrator | 2026-04-08 00:14:20.921405 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-04-08 00:14:20.984657 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-04-08 00:14:20.984747 | orchestrator | 2026-04-08 00:14:20.984762 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-04-08 00:14:22.011547 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:22.011648 | orchestrator | 2026-04-08 00:14:22.011664 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-04-08 00:14:22.722269 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:22.722364 | orchestrator | 2026-04-08 00:14:22.722381 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-04-08 00:14:33.609403 | 
orchestrator | changed: [testbed-manager] 2026-04-08 00:14:33.609539 | orchestrator | 2026-04-08 00:14:33.609558 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-04-08 00:14:33.651385 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:14:33.651516 | orchestrator | 2026-04-08 00:14:33.651535 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-04-08 00:14:33.651548 | orchestrator | 2026-04-08 00:14:33.651560 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-08 00:14:35.415176 | orchestrator | ok: [testbed-manager] 2026-04-08 00:14:35.415299 | orchestrator | 2026-04-08 00:14:35.415316 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-04-08 00:14:35.520036 | orchestrator | included: osism.services.manager for testbed-manager 2026-04-08 00:14:35.520127 | orchestrator | 2026-04-08 00:14:35.520143 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-04-08 00:14:35.571969 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-04-08 00:14:35.572057 | orchestrator | 2026-04-08 00:14:35.572071 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-04-08 00:14:37.810210 | orchestrator | ok: [testbed-manager] 2026-04-08 00:14:37.810288 | orchestrator | 2026-04-08 00:14:37.810299 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-04-08 00:14:37.852680 | orchestrator | ok: [testbed-manager] 2026-04-08 00:14:37.852772 | orchestrator | 2026-04-08 00:14:37.852786 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-04-08 00:14:37.975080 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-04-08 00:14:37.975175 | orchestrator | 2026-04-08 00:14:37.975214 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-04-08 00:14:40.663949 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-04-08 00:14:40.664049 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-04-08 00:14:40.664065 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-04-08 00:14:40.664077 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-04-08 00:14:40.664089 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-04-08 00:14:40.664100 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-04-08 00:14:40.664111 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-04-08 00:14:40.664122 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-04-08 00:14:40.664134 | orchestrator | 2026-04-08 00:14:40.664146 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-04-08 00:14:41.291918 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:41.292032 | orchestrator | 2026-04-08 00:14:41.292054 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-04-08 00:14:41.899326 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:41.899427 | orchestrator | 2026-04-08 00:14:41.899444 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-04-08 00:14:41.981422 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-04-08 00:14:41.981595 | orchestrator | 2026-04-08 00:14:41.981613 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-04-08 00:14:43.112987 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-04-08 00:14:43.113082 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-04-08 00:14:43.113097 | orchestrator | 2026-04-08 00:14:43.113109 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-04-08 00:14:43.732826 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:43.732925 | orchestrator | 2026-04-08 00:14:43.732942 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-04-08 00:14:43.789830 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:14:43.789922 | orchestrator | 2026-04-08 00:14:43.789936 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-04-08 00:14:43.859019 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-04-08 00:14:43.859113 | orchestrator | 2026-04-08 00:14:43.859129 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-04-08 00:14:44.467331 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:44.467430 | orchestrator | 2026-04-08 00:14:44.467448 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-04-08 00:14:44.535607 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-04-08 00:14:44.535704 | orchestrator | 2026-04-08 00:14:44.535720 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-04-08 00:14:45.869234 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-08 00:14:45.869415 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-04-08 00:14:45.869433 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:45.869537 | orchestrator | 2026-04-08 00:14:45.870376 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-04-08 00:14:46.476582 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:46.476675 | orchestrator | 2026-04-08 00:14:46.476691 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-04-08 00:14:46.531504 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:14:46.531619 | orchestrator | 2026-04-08 00:14:46.531644 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-04-08 00:14:46.631904 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-04-08 00:14:46.632021 | orchestrator | 2026-04-08 00:14:46.632049 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-04-08 00:14:47.167508 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:47.167602 | orchestrator | 2026-04-08 00:14:47.167619 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-04-08 00:14:47.558953 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:47.559048 | orchestrator | 2026-04-08 00:14:47.559064 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-04-08 00:14:48.749981 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-04-08 00:14:48.750134 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-04-08 00:14:48.750157 | orchestrator | 2026-04-08 00:14:48.750174 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-04-08 00:14:49.365127 | orchestrator | changed: [testbed-manager] 2026-04-08 
00:14:49.365194 | orchestrator | 2026-04-08 00:14:49.365200 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-04-08 00:14:49.714890 | orchestrator | ok: [testbed-manager] 2026-04-08 00:14:49.714993 | orchestrator | 2026-04-08 00:14:49.715009 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-04-08 00:14:50.055993 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:50.056085 | orchestrator | 2026-04-08 00:14:50.056100 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-04-08 00:14:50.094827 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:14:50.094948 | orchestrator | 2026-04-08 00:14:50.094965 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-04-08 00:14:50.155381 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-04-08 00:14:50.155499 | orchestrator | 2026-04-08 00:14:50.155516 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-04-08 00:14:50.191556 | orchestrator | ok: [testbed-manager] 2026-04-08 00:14:50.191641 | orchestrator | 2026-04-08 00:14:50.191654 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-04-08 00:14:52.162338 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-04-08 00:14:52.162471 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-04-08 00:14:52.162492 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-04-08 00:14:52.162505 | orchestrator | 2026-04-08 00:14:52.162518 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-04-08 00:14:52.830394 | orchestrator | changed: [testbed-manager] 2026-04-08 
00:14:52.830517 | orchestrator | 2026-04-08 00:14:52.830534 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-04-08 00:14:53.517349 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:53.517488 | orchestrator | 2026-04-08 00:14:53.517506 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-04-08 00:14:54.180778 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:54.180893 | orchestrator | 2026-04-08 00:14:54.180911 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-04-08 00:14:54.253990 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-04-08 00:14:54.254168 | orchestrator | 2026-04-08 00:14:54.254197 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-04-08 00:14:54.289004 | orchestrator | ok: [testbed-manager] 2026-04-08 00:14:54.289097 | orchestrator | 2026-04-08 00:14:54.289112 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-04-08 00:14:54.966271 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-04-08 00:14:54.966379 | orchestrator | 2026-04-08 00:14:54.966396 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-04-08 00:14:55.047847 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-04-08 00:14:55.047928 | orchestrator | 2026-04-08 00:14:55.047941 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-04-08 00:14:55.726285 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:55.726381 | orchestrator | 2026-04-08 00:14:55.726396 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-04-08 00:14:56.319711 | orchestrator | ok: [testbed-manager] 2026-04-08 00:14:56.319809 | orchestrator | 2026-04-08 00:14:56.319823 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-04-08 00:14:56.375914 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:14:56.376000 | orchestrator | 2026-04-08 00:14:56.376014 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-04-08 00:14:56.435757 | orchestrator | ok: [testbed-manager] 2026-04-08 00:14:56.435850 | orchestrator | 2026-04-08 00:14:56.435865 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-04-08 00:14:57.210511 | orchestrator | changed: [testbed-manager] 2026-04-08 00:14:57.210614 | orchestrator | 2026-04-08 00:14:57.210629 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-04-08 00:16:09.278100 | orchestrator | changed: [testbed-manager] 2026-04-08 00:16:09.278235 | orchestrator | 2026-04-08 00:16:09.278256 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-04-08 00:16:10.243767 | orchestrator | ok: [testbed-manager] 2026-04-08 00:16:10.243869 | orchestrator | 2026-04-08 00:16:10.243885 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-04-08 00:16:10.300040 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:16:10.300160 | orchestrator | 2026-04-08 00:16:10.300185 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-04-08 00:16:14.348997 | orchestrator | changed: [testbed-manager] 2026-04-08 00:16:14.349101 | orchestrator | 2026-04-08 00:16:14.349123 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-04-08 00:16:14.403240 | orchestrator | ok: [testbed-manager] 2026-04-08 00:16:14.403354 | orchestrator | 2026-04-08 00:16:14.403368 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-08 00:16:14.403379 | orchestrator | 2026-04-08 00:16:14.403390 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-04-08 00:16:14.545991 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:16:14.546105 | orchestrator | 2026-04-08 00:16:14.546120 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-04-08 00:17:14.598940 | orchestrator | Pausing for 60 seconds 2026-04-08 00:17:14.599044 | orchestrator | changed: [testbed-manager] 2026-04-08 00:17:14.599060 | orchestrator | 2026-04-08 00:17:14.599073 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-04-08 00:17:17.587301 | orchestrator | changed: [testbed-manager] 2026-04-08 00:17:17.587425 | orchestrator | 2026-04-08 00:17:17.587453 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-04-08 00:17:58.973548 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-04-08 00:17:58.973624 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-04-08 00:17:58.973630 | orchestrator | changed: [testbed-manager] 2026-04-08 00:17:58.973636 | orchestrator | 2026-04-08 00:17:58.973641 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-04-08 00:18:04.435693 | orchestrator | changed: [testbed-manager] 2026-04-08 00:18:04.435893 | orchestrator | 2026-04-08 00:18:04.435926 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-04-08 00:18:04.516342 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-04-08 00:18:04.516468 | orchestrator | 2026-04-08 00:18:04.516495 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-08 00:18:04.516515 | orchestrator | 2026-04-08 00:18:04.516528 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-04-08 00:18:04.561736 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:18:04.561831 | orchestrator | 2026-04-08 00:18:04.561846 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-04-08 00:18:04.635688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-04-08 00:18:04.635789 | orchestrator | 2026-04-08 00:18:04.635806 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-04-08 00:18:05.378295 | orchestrator | changed: [testbed-manager] 2026-04-08 00:18:05.378399 | orchestrator | 2026-04-08 00:18:05.378416 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-04-08 00:18:08.509688 | orchestrator | ok: [testbed-manager] 2026-04-08 00:18:08.509791 | orchestrator | 2026-04-08 00:18:08.509808 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-04-08 00:18:08.574192 | orchestrator | ok: [testbed-manager] => { 2026-04-08 00:18:08.574283 | orchestrator | "version_check_result.stdout_lines": [ 2026-04-08 00:18:08.574299 | orchestrator | "=== OSISM Container Version Check ===", 2026-04-08 00:18:08.574311 | orchestrator | "Checking running containers against expected versions...", 2026-04-08 00:18:08.574321 | orchestrator | "", 2026-04-08 00:18:08.574328 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-04-08 00:18:08.574335 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20260322.0", 2026-04-08 00:18:08.574343 | orchestrator | " Enabled: true", 2026-04-08 00:18:08.574350 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20260322.0", 2026-04-08 00:18:08.574356 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:18:08.574363 | orchestrator | "", 2026-04-08 00:18:08.574370 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-04-08 00:18:08.574402 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20260322.0", 2026-04-08 00:18:08.574409 | orchestrator | " Enabled: true", 2026-04-08 00:18:08.574416 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20260322.0", 2026-04-08 00:18:08.574422 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:18:08.574428 | orchestrator | "", 2026-04-08 00:18:08.574435 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-04-08 00:18:08.574441 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20260322.0", 2026-04-08 00:18:08.574447 | orchestrator | " Enabled: true", 2026-04-08 00:18:08.574454 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20260322.0", 2026-04-08 00:18:08.574460 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:18:08.574467 | orchestrator | 
"", 2026-04-08 00:18:08.574473 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-04-08 00:18:08.574479 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20260322.0", 2026-04-08 00:18:08.574486 | orchestrator | " Enabled: true", 2026-04-08 00:18:08.574492 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20260322.0", 2026-04-08 00:18:08.574498 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:18:08.574504 | orchestrator | "", 2026-04-08 00:18:08.574510 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-04-08 00:18:08.574517 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20260328.0", 2026-04-08 00:18:08.574523 | orchestrator | " Enabled: true", 2026-04-08 00:18:08.574529 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20260328.0", 2026-04-08 00:18:08.574535 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:18:08.574543 | orchestrator | "", 2026-04-08 00:18:08.574553 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-04-08 00:18:08.574563 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-08 00:18:08.574573 | orchestrator | " Enabled: true", 2026-04-08 00:18:08.574582 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-08 00:18:08.574591 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:18:08.574600 | orchestrator | "", 2026-04-08 00:18:08.574610 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-04-08 00:18:08.574621 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-08 00:18:08.574632 | orchestrator | " Enabled: true", 2026-04-08 00:18:08.574644 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-08 00:18:08.574651 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:18:08.574657 | orchestrator | "", 2026-04-08 00:18:08.574664 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-04-08 00:18:08.574670 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-08 00:18:08.574676 | orchestrator | " Enabled: true", 2026-04-08 00:18:08.574682 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-08 00:18:08.574688 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:18:08.574695 | orchestrator | "", 2026-04-08 00:18:08.574701 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-04-08 00:18:08.574707 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20260320.0", 2026-04-08 00:18:08.574713 | orchestrator | " Enabled: true", 2026-04-08 00:18:08.574720 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20260320.0", 2026-04-08 00:18:08.574726 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:18:08.574732 | orchestrator | "", 2026-04-08 00:18:08.574738 | orchestrator | "Checking service: redis (Redis Cache)", 2026-04-08 00:18:08.574745 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-08 00:18:08.574751 | orchestrator | " Enabled: true", 2026-04-08 00:18:08.574757 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-08 00:18:08.574763 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:18:08.574770 | orchestrator | "", 2026-04-08 00:18:08.574776 | orchestrator | "Checking service: api (OSISM API Service)", 2026-04-08 00:18:08.574789 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-08 00:18:08.574795 | orchestrator | " Enabled: true", 2026-04-08 00:18:08.574802 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-08 00:18:08.574808 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:18:08.574814 | orchestrator | "", 2026-04-08 00:18:08.574824 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-04-08 00:18:08.574831 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-08 00:18:08.574837 | orchestrator | " Enabled: true", 2026-04-08 00:18:08.574843 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-08 00:18:08.574849 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:18:08.574856 | orchestrator | "", 2026-04-08 00:18:08.574863 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-08 00:18:08.574869 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-08 00:18:08.574875 | orchestrator | " Enabled: true", 2026-04-08 00:18:08.574881 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-08 00:18:08.574887 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:18:08.574894 | orchestrator | "", 2026-04-08 00:18:08.574900 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-08 00:18:08.574906 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-08 00:18:08.574912 | orchestrator | " Enabled: true", 2026-04-08 00:18:08.574919 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-08 00:18:08.574937 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:18:08.574944 | orchestrator | "", 2026-04-08 00:18:08.574950 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-08 00:18:08.574956 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-08 00:18:08.574963 | orchestrator | " Enabled: true", 2026-04-08 00:18:08.574969 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-08 00:18:08.574975 | orchestrator | " Status: ✅ MATCH", 2026-04-08 00:18:08.574981 | orchestrator | "", 2026-04-08 00:18:08.574988 | orchestrator | "=== Summary ===", 2026-04-08 00:18:08.574994 | orchestrator | "Errors (version mismatches): 0", 2026-04-08 00:18:08.575000 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-04-08 00:18:08.575006 | orchestrator | "", 2026-04-08 00:18:08.575013 | orchestrator | "✅ All running containers match expected versions!" 2026-04-08 00:18:08.575019 | orchestrator | ] 2026-04-08 00:18:08.575027 | orchestrator | } 2026-04-08 00:18:08.575039 | orchestrator | 2026-04-08 00:18:08.575050 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-08 00:18:08.624985 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:18:08.625080 | orchestrator | 2026-04-08 00:18:08.625095 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:18:08.625105 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-08 00:18:08.625113 | orchestrator | 2026-04-08 00:18:08.734723 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-08 00:18:08.734808 | orchestrator | + deactivate 2026-04-08 00:18:08.734823 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-08 00:18:08.734837 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-08 00:18:08.734848 | orchestrator | + export PATH 2026-04-08 00:18:08.734859 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-08 00:18:08.734871 | orchestrator | + '[' -n '' ']' 2026-04-08 00:18:08.734882 | orchestrator | + hash -r 2026-04-08 00:18:08.734893 | orchestrator | + '[' -n '' ']' 2026-04-08 00:18:08.734904 | orchestrator | + unset VIRTUAL_ENV 2026-04-08 00:18:08.734915 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-08 00:18:08.734926 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-08 00:18:08.734937 | orchestrator | + unset -f deactivate 2026-04-08 00:18:08.734949 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-08 00:18:08.740578 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-08 00:18:08.740616 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-08 00:18:08.740652 | orchestrator | + local max_attempts=60 2026-04-08 00:18:08.740664 | orchestrator | + local name=ceph-ansible 2026-04-08 00:18:08.740675 | orchestrator | + local attempt_num=1 2026-04-08 00:18:08.741068 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:18:08.780543 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:18:08.780608 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-08 00:18:08.780621 | orchestrator | + local max_attempts=60 2026-04-08 00:18:08.780632 | orchestrator | + local name=kolla-ansible 2026-04-08 00:18:08.780643 | orchestrator | + local attempt_num=1 2026-04-08 00:18:08.781049 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-08 00:18:08.815337 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:18:08.815401 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-08 00:18:08.815414 | orchestrator | + local max_attempts=60 2026-04-08 00:18:08.815426 | orchestrator | + local name=osism-ansible 2026-04-08 00:18:08.815437 | orchestrator | + local attempt_num=1 2026-04-08 00:18:08.815836 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-08 00:18:08.853185 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:18:08.853283 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-08 00:18:08.853308 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-08 00:18:09.532371 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-08 00:18:09.703351 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-08 00:18:09.703448 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-04-08 00:18:09.703463 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-04-08 00:18:09.703475 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-04-08 00:18:09.703501 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-04-08 00:18:09.703513 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-04-08 00:18:09.703524 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-04-08 00:18:09.703535 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2026-04-08 00:18:09.703546 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-04-08 00:18:09.703558 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-04-08 00:18:09.703569 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20260320.0 
"/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2026-04-08 00:18:09.703580 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-04-08 00:18:09.703618 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-04-08 00:18:09.703630 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-04-08 00:18:09.703641 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20260322.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-04-08 00:18:09.703652 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-04-08 00:18:09.707685 | orchestrator | ++ semver 10.0.0 7.0.0 2026-04-08 00:18:09.736236 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-08 00:18:09.736326 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-08 00:18:09.737846 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-08 00:18:22.323485 | orchestrator | 2026-04-08 00:18:22 | INFO  | Prepare task for execution of resolvconf. 2026-04-08 00:18:22.512865 | orchestrator | 2026-04-08 00:18:22 | INFO  | Task 1f718eee-408f-4f0c-b951-cb1c4303f32a (resolvconf) was prepared for execution. 2026-04-08 00:18:22.512931 | orchestrator | 2026-04-08 00:18:22 | INFO  | It takes a moment until task 1f718eee-408f-4f0c-b951-cb1c4303f32a (resolvconf) has been started and output is visible here. 
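The xtrace above shows repeated calls to a `wait_for_container_healthy` helper that polls `docker inspect` until a container reports healthy. A minimal re-creation of that helper, for readers following the trace: the function name, arguments, and the `docker inspect -f '{{.State.Health.Status}}'` probe come directly from the trace; the poll interval and the failure message are assumptions, since the containers here were already healthy on the first check.

```shell
# Sketch of the wait_for_container_healthy helper traced in the log.
# Polls the Docker health status of a named container until it reports
# "healthy", giving up after max_attempts polls.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    local status
    while true; do
        status="$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)"
        if [[ "$status" == "healthy" ]]; then
            return 0
        fi
        if [[ "$attempt_num" -ge "$max_attempts" ]]; then
            echo "container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # assumed poll interval; the real interval is not visible in the log
    done
}
```

In the trace each container (ceph-ansible, kolla-ansible, osism-ansible) satisfied the `healthy` check on the first poll, so the loop body never recurred.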
2026-04-08 00:18:35.361593 | orchestrator | 2026-04-08 00:18:35.361736 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-08 00:18:35.361754 | orchestrator | 2026-04-08 00:18:35.361767 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-08 00:18:35.361778 | orchestrator | Wednesday 08 April 2026 00:18:25 +0000 (0:00:00.171) 0:00:00.171 ******* 2026-04-08 00:18:35.361789 | orchestrator | ok: [testbed-manager] 2026-04-08 00:18:35.361801 | orchestrator | 2026-04-08 00:18:35.361812 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-08 00:18:35.361824 | orchestrator | Wednesday 08 April 2026 00:18:29 +0000 (0:00:03.733) 0:00:03.904 ******* 2026-04-08 00:18:35.361835 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:18:35.361847 | orchestrator | 2026-04-08 00:18:35.361858 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-08 00:18:35.361869 | orchestrator | Wednesday 08 April 2026 00:18:29 +0000 (0:00:00.058) 0:00:03.963 ******* 2026-04-08 00:18:35.361880 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-08 00:18:35.361891 | orchestrator | 2026-04-08 00:18:35.361902 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-08 00:18:35.361913 | orchestrator | Wednesday 08 April 2026 00:18:29 +0000 (0:00:00.080) 0:00:04.043 ******* 2026-04-08 00:18:35.361925 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-08 00:18:35.361936 | orchestrator | 2026-04-08 00:18:35.361947 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-08 00:18:35.361958 | orchestrator | Wednesday 08 April 2026 00:18:29 +0000 (0:00:00.059) 0:00:04.103 ******* 2026-04-08 00:18:35.361969 | orchestrator | ok: [testbed-manager] 2026-04-08 00:18:35.361980 | orchestrator | 2026-04-08 00:18:35.361991 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-08 00:18:35.362002 | orchestrator | Wednesday 08 April 2026 00:18:30 +0000 (0:00:01.118) 0:00:05.222 ******* 2026-04-08 00:18:35.362013 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:18:35.362097 | orchestrator | 2026-04-08 00:18:35.362148 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-08 00:18:35.362164 | orchestrator | Wednesday 08 April 2026 00:18:30 +0000 (0:00:00.051) 0:00:05.273 ******* 2026-04-08 00:18:35.362177 | orchestrator | ok: [testbed-manager] 2026-04-08 00:18:35.362189 | orchestrator | 2026-04-08 00:18:35.362202 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-08 00:18:35.362214 | orchestrator | Wednesday 08 April 2026 00:18:31 +0000 (0:00:00.555) 0:00:05.828 ******* 2026-04-08 00:18:35.362227 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:18:35.362239 | orchestrator | 2026-04-08 00:18:35.362251 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-08 00:18:35.362265 | orchestrator | Wednesday 08 April 2026 00:18:31 +0000 (0:00:00.080) 0:00:05.909 ******* 2026-04-08 00:18:35.362277 | orchestrator | changed: [testbed-manager] 2026-04-08 00:18:35.362290 | orchestrator | 2026-04-08 00:18:35.362303 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-08 00:18:35.362315 | orchestrator | Wednesday 08 April 2026 00:18:31 +0000 (0:00:00.566) 0:00:06.475 ******* 2026-04-08 00:18:35.362328 | orchestrator | changed: 
[testbed-manager] 2026-04-08 00:18:35.362341 | orchestrator | 2026-04-08 00:18:35.362355 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-08 00:18:35.362368 | orchestrator | Wednesday 08 April 2026 00:18:32 +0000 (0:00:01.101) 0:00:07.576 ******* 2026-04-08 00:18:35.362381 | orchestrator | ok: [testbed-manager] 2026-04-08 00:18:35.362394 | orchestrator | 2026-04-08 00:18:35.362407 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-08 00:18:35.362419 | orchestrator | Wednesday 08 April 2026 00:18:33 +0000 (0:00:00.980) 0:00:08.557 ******* 2026-04-08 00:18:35.362432 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-08 00:18:35.362445 | orchestrator | 2026-04-08 00:18:35.362457 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-08 00:18:35.362470 | orchestrator | Wednesday 08 April 2026 00:18:34 +0000 (0:00:00.095) 0:00:08.653 ******* 2026-04-08 00:18:35.362483 | orchestrator | changed: [testbed-manager] 2026-04-08 00:18:35.362497 | orchestrator | 2026-04-08 00:18:35.362510 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:18:35.362522 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-08 00:18:35.362533 | orchestrator | 2026-04-08 00:18:35.362544 | orchestrator | 2026-04-08 00:18:35.362555 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:18:35.362566 | orchestrator | Wednesday 08 April 2026 00:18:35 +0000 (0:00:01.142) 0:00:09.796 ******* 2026-04-08 00:18:35.362577 | orchestrator | =============================================================================== 2026-04-08 00:18:35.362587 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.73s 2026-04-08 00:18:35.362608 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.14s 2026-04-08 00:18:35.362620 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.12s 2026-04-08 00:18:35.362631 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.10s 2026-04-08 00:18:35.362642 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.98s 2026-04-08 00:18:35.362653 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.57s 2026-04-08 00:18:35.362681 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.56s 2026-04-08 00:18:35.362693 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s 2026-04-08 00:18:35.362703 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-04-08 00:18:35.362714 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-04-08 00:18:35.362733 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s 2026-04-08 00:18:35.362744 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-04-08 00:18:35.362755 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2026-04-08 00:18:35.536474 | orchestrator | + osism apply sshconfig 2026-04-08 00:18:46.779916 | orchestrator | 2026-04-08 00:18:46 | INFO  | Prepare task for execution of sshconfig. 2026-04-08 00:18:46.849609 | orchestrator | 2026-04-08 00:18:46 | INFO  | Task 9f06e36c-1452-41ba-972c-eed5c9399b2a (sshconfig) was prepared for execution. 
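The resolvconf play above replaced `/etc/resolv.conf` with a symlink to systemd-resolved's stub resolver (the "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf" task reported `changed`). A hedged sketch of how that end state could be verified after the fact; the helper name is made up for illustration, and only the stub path from the task name is taken from the log:

```shell
# Check whether a resolv.conf path is the symlink to the systemd-resolved
# stub file that the osism.commons.resolvconf role creates.
resolvconf_is_stub_link() {
    [ "$(readlink "$1" 2>/dev/null)" = "/run/systemd/resolve/stub-resolv.conf" ]
}

resolvconf_is_stub_link /etc/resolv.conf \
    && echo "resolv.conf is managed by systemd-resolved" \
    || echo "resolv.conf is not the systemd-resolved stub link"
```

Note the role links the *stub* file (which routes queries through the local 127.0.0.53 listener) rather than `/run/systemd/resolve/resolv.conf`, which lists the upstream servers directly.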
2026-04-08 00:18:46.849702 | orchestrator | 2026-04-08 00:18:46 | INFO  | It takes a moment until task 9f06e36c-1452-41ba-972c-eed5c9399b2a (sshconfig) has been started and output is visible here. 2026-04-08 00:18:57.186570 | orchestrator | 2026-04-08 00:18:57.186706 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-08 00:18:57.186724 | orchestrator | 2026-04-08 00:18:57.186736 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-08 00:18:57.186748 | orchestrator | Wednesday 08 April 2026 00:18:49 +0000 (0:00:00.170) 0:00:00.170 ******* 2026-04-08 00:18:57.186760 | orchestrator | ok: [testbed-manager] 2026-04-08 00:18:57.186772 | orchestrator | 2026-04-08 00:18:57.186784 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-08 00:18:57.186795 | orchestrator | Wednesday 08 April 2026 00:18:50 +0000 (0:00:00.905) 0:00:01.076 ******* 2026-04-08 00:18:57.186806 | orchestrator | changed: [testbed-manager] 2026-04-08 00:18:57.186818 | orchestrator | 2026-04-08 00:18:57.186829 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-08 00:18:57.186840 | orchestrator | Wednesday 08 April 2026 00:18:51 +0000 (0:00:00.479) 0:00:01.555 ******* 2026-04-08 00:18:57.186851 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-08 00:18:57.186863 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-08 00:18:57.186874 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-08 00:18:57.186885 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-08 00:18:57.186896 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-08 00:18:57.186907 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-08 00:18:57.186918 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-08 00:18:57.186928 | orchestrator | 2026-04-08 00:18:57.186939 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-08 00:18:57.186951 | orchestrator | Wednesday 08 April 2026 00:18:56 +0000 (0:00:05.442) 0:00:06.997 ******* 2026-04-08 00:18:57.186961 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:18:57.186972 | orchestrator | 2026-04-08 00:18:57.186983 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-08 00:18:57.186994 | orchestrator | Wednesday 08 April 2026 00:18:56 +0000 (0:00:00.094) 0:00:07.092 ******* 2026-04-08 00:18:57.187005 | orchestrator | changed: [testbed-manager] 2026-04-08 00:18:57.187016 | orchestrator | 2026-04-08 00:18:57.187027 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:18:57.187039 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:18:57.187051 | orchestrator | 2026-04-08 00:18:57.187062 | orchestrator | 2026-04-08 00:18:57.187073 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:18:57.187084 | orchestrator | Wednesday 08 April 2026 00:18:57 +0000 (0:00:00.484) 0:00:07.576 ******* 2026-04-08 00:18:57.187095 | orchestrator | =============================================================================== 2026-04-08 00:18:57.187157 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.44s 2026-04-08 00:18:57.187195 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.91s 2026-04-08 00:18:57.187207 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.48s 2026-04-08 00:18:57.187217 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.48s 2026-04-08 00:18:57.187228 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2026-04-08 00:18:57.291393 | orchestrator | + osism apply known-hosts 2026-04-08 00:19:08.400170 | orchestrator | 2026-04-08 00:19:08 | INFO  | Prepare task for execution of known-hosts. 2026-04-08 00:19:08.464316 | orchestrator | 2026-04-08 00:19:08 | INFO  | Task 04c62a1a-864b-40ed-a795-ea0172cccb5b (known-hosts) was prepared for execution. 2026-04-08 00:19:08.464431 | orchestrator | 2026-04-08 00:19:08 | INFO  | It takes a moment until task 04c62a1a-864b-40ed-a795-ea0172cccb5b (known-hosts) has been started and output is visible here. 2026-04-08 00:19:22.707427 | orchestrator | 2026-04-08 00:19:22.707533 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-08 00:19:22.707549 | orchestrator | 2026-04-08 00:19:22.707572 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-08 00:19:22.707586 | orchestrator | Wednesday 08 April 2026 00:19:11 +0000 (0:00:00.172) 0:00:00.172 ******* 2026-04-08 00:19:22.707598 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-08 00:19:22.707611 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-08 00:19:22.707622 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-08 00:19:22.707634 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-08 00:19:22.707645 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-08 00:19:22.707656 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-08 00:19:22.707667 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-08 00:19:22.707679 | orchestrator | 2026-04-08 00:19:22.707690 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-08 
00:19:22.707703 | orchestrator | Wednesday 08 April 2026 00:19:17 +0000 (0:00:06.089) 0:00:06.261 ******* 2026-04-08 00:19:22.707715 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-08 00:19:22.707729 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-08 00:19:22.707740 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-08 00:19:22.707752 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-08 00:19:22.707763 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-08 00:19:22.707774 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-08 00:19:22.707786 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-08 00:19:22.707797 | orchestrator | 2026-04-08 00:19:22.707808 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-08 00:19:22.707819 | orchestrator | Wednesday 08 April 2026 00:19:17 +0000 (0:00:00.146) 0:00:06.408 ******* 2026-04-08 00:19:22.707831 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICqvuAb3f6IlOMJeLyLOgh7/nju8TXfXPm+f3RgcCx0g) 2026-04-08 00:19:22.707870 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQEhzLinhRXWE/n355n6T08K7aGjrQzwGx6168LuQtimvQa6VTsFWAuajj5lxHWzKi0kQ9Bl+/KZlX1gz8vsewnqixbvK2i/bIxfM1qzUIkc8oLstZBZHqDj5tFVEvQFyOkqhmQYE5KNWGZBPsezyLrv8U/rVS4CVHCOoioiBndHfZW3uejQvzvEWteKvji99Bdni4+8a58XNOeaYY/QD1vd4XTwK69MOdtL80rZkDxSnqsRDPytz58AGtI8b90urHcWBVjT+RYw6WEfPu/UJwnA4lRlAwTmEKRHhTbiZgBrvMxjyRyyS7OGSpFe8e9/FYV2Rprr9r1DzLZrx01pDa5nnqUGidn53lwOKWz1f8s6+ZFQAlmrLkUwvroOKGbJvk7Wlg7WYaxoq4hzs33XSKhcLF3wmNn9WrHLr12N97ww5FvuPN6WrA5Q/H5k3Nx8IiSdnOwGq3npLuU7rp8TH7Jh0emd9WYYy+9zftnHAJdy9Y93Pbk7ki2CVdQyNV75U=) 2026-04-08 00:19:22.707887 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHcH18p151y8YNdcCt+dhxEeUuCgxTbxnqYfQjbQ9ZMNw1DFhhZp6EoWUB7Q8MVG68t19czS1/vUPheiIKb4rh8=) 2026-04-08 00:19:22.707899 | orchestrator | 2026-04-08 00:19:22.707910 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-08 00:19:22.707922 | orchestrator | Wednesday 08 April 2026 00:19:18 +0000 (0:00:01.123) 0:00:07.531 ******* 2026-04-08 00:19:22.707952 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtYVGRdXyTXFR3A2IrU2Saj6V6+IGxCCuhhEziG9FXDDNB2i5mvrFUS2mihFuiJjrnPHdRnXCiOfb5ufpS/PtDdbF02Cy11kB0cM6tWwuyzx2FRCVgheT5o+BQcQ6EQLTjDAj/iVrli8x14VmMsWHDjeia+zlUM/OMzLoDvS1Fh4dH4fd8fWwbF2u1+WFrhL9TWwpml9uX6mQ6kODoZlDI91zKMOWa7xxeeLdq7AZYli8GL5T+Qe1mG3+t5MvMJyKsi1k8WglHNSkGtYExdBGDZ/6usB79PzdkcD1upbz8gi3RwO4NNXdvcB5f5q3c3u+mdckaCvIjnwdXC8b12VjKBPawBrpLmgK/MzmgECOT+2lB23Y5qVLyRg6nmXA+h7BzznfIzT42ltH19r7Zkld/ThdJehvgxrc0NPe+Ik8rTV1mdFrtCq4wntfM/1VApPcv6iCYte+hZwbcwtMuFBCFzzhtz7938VwBfj7lF4wGg/7YvDZiA1zYVeNzKPI0nyE=) 
2026-04-08 00:19:22.707968 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKvREp20+uvBWCBGtFsnDnFg/7hYjTF8nxr8KJxS+NI1TWPgH2lCsaqL2TnLiLEf8jY2yI4aWE6KDE2+W9KMf+Q=)
2026-04-08 00:19:22.707980 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKUnWKGUU13YScOONItCKGDpTGAQaL48uT3vD3nVHcTW)
2026-04-08 00:19:22.707993 | orchestrator |
2026-04-08 00:19:22.708008 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-08 00:19:22.708029 | orchestrator | Wednesday 08 April 2026 00:19:19 +0000 (0:00:00.946) 0:00:08.478 *******
2026-04-08 00:19:22.708168 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyrTmg9zPYbgCa3c8aP7mpPGPgeFqC0jufSFOmwy+QPRUUr76EM5XOEknk09GdsIWfiOZcP2y0suPT/lUrjjmmmrUXuQE76V35iF07G3nYb3H+8vXuf0Pr6f6+ElsUc4brfyA1on2peiYdEcmFqqNQeYURvEv04AJo7CvwnehBdbGv3f61JDbzhpYuInYo+gcyID+9kUWxfUy2vQxt1ErS6GtDNyrDT68+tGZ6J+c51AuqmthNJ36M2LmtSlziuk0QBtcWcY0sqkZ1QijHrVoPLlAqCTsuEd/Ze8+Ar4th+e+tHCaiYNkKq7FBn4nyt9diR3EfWIlsYpG49qX0C+PB3cR7skoMCTg+XEtB9/D5GO6POvwY6qEqNjk1kYEFvcL9bSIbCQfhBM02F7RpLFTWIDh7SdhI4ZtMoCvIJeo9DD1fxgnGog2fa4rEuxMrdYlBs0YsVWdLRUXy+44XHz4NpYXnxhgLR9ypmRlehpoLpnVnoOGx95Q7JxGPSxicDMs=)
2026-04-08 00:19:22.708189 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFqXQtTHadbZnPWUDpUIPbZZu5dwaTlUJbPrXI7IibUet6c2qNB8veghVkoAJ3GZfa1YAYKOBwzgAUtZE2lIaKM=)
2026-04-08 00:19:22.708207 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO9VNDUPvKm3Nk/Vs+jsjXyRs3O3IZHA84p2yFDzLi86)
2026-04-08 00:19:22.708221 | orchestrator |
2026-04-08 00:19:22.708233 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-08 00:19:22.708245 | orchestrator | Wednesday 08 April 2026 00:19:20 +0000 (0:00:00.916) 0:00:09.394 *******
2026-04-08 00:19:22.708264 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCq043tqLxH2Ef8S2qj48YN4K8aqqnRufsMZ8BVk3qyMV8tLepauddzacvPNhXeVIjTl+XvBzM6WvWnYGFJRS9tdSExcd1+v99mXeg2HblexK8nFxqUG3C76tsn11+Rs3KM8CWVux2UzzDl+ssFPhFbFaxoru230BOlolq5gV6QY8AQh+hsRWyMNVMzPZ5ch+0MmCBkNCzVWkocSbaY+2+km7xfdY4Z2F40UOBhGHQ91SyxNWsbGyPZOuz2uEcVDRdqHhnooWhcSw0jqEsl90WVw49rp9cnjLfOGMi9S1mrY7LFl5ynReSFf4ABU5qtE79gahiY24kufrDI2toTEl+ypuoAoN5pjIA5t5T+EqgKYXYIMfX/vJdKDSDyWpd76KUqIyZm9DgUPVzNdBlN+gGYqYM0Uo6JBYCme9IQl7m1DELK42sIgnDV43TwdZ3Ggfdz3aB7Hafnlu90cC0NkpsHaloIihbTUv10m2Q7vXMyoC6K+bJEIx0ALMDlm1MkgtM=)
2026-04-08 00:19:22.708288 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPtrcYvYftVhA51KnsRKJZ6APYb7dZ3aMldhReYLB0r0)
2026-04-08 00:19:22.708302 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC1/024IAZ3snnUxWyfeeVylrqa+btB0welHEyy08afpAn9wJ0FWXGG8BNrdLuUfNl0Q0kGHtRNclW1mVr1s+8w=)
2026-04-08 00:19:22.708315 | orchestrator |
2026-04-08 00:19:22.708327 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-08 00:19:22.708339 | orchestrator | Wednesday 08 April 2026 00:19:21 +0000 (0:00:00.926) 0:00:10.321 *******
2026-04-08 00:19:22.708350 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDXVg0b+gwiejISkksIBvAg3UPqjyi5QOIlZVNxqgaoCkThLPCUt0o7rdd0fm17YyhPI66VILx9ZxyHZSw6QD3M=)
2026-04-08 00:19:22.708361 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuA9sTIK9OCzZ7GbOSbUAw8dNkNduTcPvI4BWAKbrtxbMTjMK3Ncq/r1xTDILQF42ttXdOS9/gqH82VVDaQGNaj9Xr+gBZTxc2a3tadnUpTlZBOmYI3QZCsTfkJHP0tdeDyU6UxQPdHX1dALeiDuCzKYyk162YlZs5kJSF2H7pP0ajBH/cazJRHoWJJGwPuaMYLANXtn9hbnA8nu9dPHq7eLvbRsDuF8r35fBn/FBd22+FRTkAPceTXePPrfzkHJ5I1kF53W1QHGMIgewhsRisg5D91Mcj47Tj7RAMiQDRnhgEfsA6ZtDY4ZiL5iv1m2ppKMAl/+tpHOeMzWoMMQFkcOnBZ02fdplvOplXY6R8ZvVA716gLJQmsRkDJGASrSOZFdUsY/BWtZAGBFyY6vRPSFwJBLtpRJkFLB1gI9V/tgYMLrnRXyAx4QiyhRyoQRJD0Flm5rx19sU+6t3fV8XkxL3LaUCokUcOardMnGkXxZIUvbxRdsPvroT+20lq56M=)
2026-04-08 00:19:22.708373 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDYFFexjWDOY1K6etJHBgA+aaDCMxl1pwF1kCH9u8QUm)
2026-04-08 00:19:22.708384 | orchestrator |
2026-04-08 00:19:22.708395 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-08 00:19:22.708406 | orchestrator | Wednesday 08 April 2026 00:19:22 +0000 (0:00:00.959) 0:00:11.280 *******
2026-04-08 00:19:22.708428 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhgg1uSH8V9w1GuBTjKiEG4Pinjk8b/3oQ22+vKxEabwhpYleqhr0+iqbSVd8VAHtmVLBHUWskY5ZJ5163IHWLYq9s7glONNGKz0z23nmOaZz4erlvWlWwmVBF/AUHnUUgS3s8R64/+dE+Cz5ygfE0HkFQRcfkx5bzstUfdiCm/P13S8F4PQMrDf0au+uKeaKfXKwId9/Pw+SdLEv1k1gmQuFLOHpFE4KaSWMnSoxo6D8ZxzP6sOdazHClvWDPC/L2chzLsjTmTde4N7HqmVO7zDKbXCrBgcrEtdcABYcA6FNmfEAIcZdNPuGUsfEUYRh9HwDJtiJJy3DJ2ZcEwf4/rS1kTfgUMSWIgVI/hMrR22RY8mtHQ7i9mbAmRyygaosTehfWN3DyFIDwROUKNqLBFVusQTdF0ulrfigV9GmMpevtOPjhTVKOgU7K+YKE0KNRKXC/A+RyeYluJ/LvFpQ2Wfx6MHtiqDhhTgAFEsV+VYkx/hE/vpmzYONdtVRpEls=)
2026-04-08 00:19:33.347319 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE9wCWCKMQDPmxZLonjNczZK5BKmsdjgNMLEpcaLsTsDP89DoZ84WKOh2l4R6iFgKX0sBdEsEXn1/T8WXD7pyhU=)
2026-04-08 00:19:33.347417 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFpnKELlmuHX8BsioFwYIGmSElNQEamU+vOqwXBv0g1J)
2026-04-08 00:19:33.347430 | orchestrator |
2026-04-08 00:19:33.347441 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-08 00:19:33.347451 | orchestrator | Wednesday 08 April 2026 00:19:23 +0000 (0:00:00.930) 0:00:12.210 *******
2026-04-08 00:19:33.347460 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAaPk4MZZFS7RUisX/wlJHqPfYGvNjm7Fv7SXm3OLDpTxD0U4MGBJ4Ic/QhSglEb5PXoW8OykZ6SvKgtrQ+jsFE=)
2026-04-08 00:19:33.347470 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOfneW0Qt0CBEJ7h/hPeD2RoJZxMALb8xOg/0ZoEHaYs)
2026-04-08 00:19:33.347513 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxdqiuffq7wyibB+/ZA2/uIh12u+5XQdFA6xPMV/49+uMtNX9aZRAYoM/C/nYQNHHLkuxtEdsBHMMLh0GLjj1DcqSuz9DQUMVxO27EzgedBYxk9mbGQ+9hvWjG5LfxOmfGtn28QCUiqyQCoEbWEE2bjXJJr0i/6EG8TMRjVLDQg6YLi4fFr58DzLAKkdbfdIlEVTk9XzdOQCg/3pF+WsVeRuAP5g6TlD8DEZoypN6MndxVSOKr2ZbWyU/2Uj2sFZmoPqmj92eUpS4ZuTDdYoLZN0kYjA83aRI/9qsiN1ExjIvACJAd5EH+yj/IAozKW0xLYptBPXhAf0NBp2pN4UMU8kZc9XdNiMgPaU0KmZLOkV6U5vZ8NxC3EsvKakLlFb3UNTbyQaqA6UH5NvnTUyft6RNOyXLF0EtHExM42aOkUIbeQ19aUc++gBk9DzrIYWk22xPBa953Uo3OvEVXAXH2GGWKHX8PzC+Yd6GC1RM5nmeAiv9VYfQO2aIiyIyWvps=)
2026-04-08 00:19:33.347525 | orchestrator |
2026-04-08 00:19:33.347535 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2026-04-08 00:19:33.347544 | orchestrator | Wednesday 08 April 2026 00:19:24 +0000 (0:00:00.904) 0:00:13.115 *******
2026-04-08 00:19:33.347554 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-04-08 00:19:33.347563 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-04-08 00:19:33.347572 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-04-08 00:19:33.347581 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-04-08 00:19:33.347589 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-04-08 00:19:33.347598 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-04-08 00:19:33.347607 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-04-08 00:19:33.347615 | orchestrator |
2026-04-08 00:19:33.347639 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2026-04-08 00:19:33.347650 | orchestrator | Wednesday 08 April 2026 00:19:29 +0000 (0:00:04.947) 0:00:18.062 *******
2026-04-08 00:19:33.347660 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-04-08 00:19:33.347671 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-04-08 00:19:33.347680 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-04-08 00:19:33.347689 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-04-08 00:19:33.347699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-04-08 00:19:33.347708 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-04-08 00:19:33.347717 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-04-08 00:19:33.347726 | orchestrator |
2026-04-08 00:19:33.347735 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-08 00:19:33.347744 | orchestrator | Wednesday 08 April 2026 00:19:29 +0000 (0:00:00.159) 0:00:18.222 *******
2026-04-08 00:19:33.347752 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICqvuAb3f6IlOMJeLyLOgh7/nju8TXfXPm+f3RgcCx0g)
2026-04-08 00:19:33.347783 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQEhzLinhRXWE/n355n6T08K7aGjrQzwGx6168LuQtimvQa6VTsFWAuajj5lxHWzKi0kQ9Bl+/KZlX1gz8vsewnqixbvK2i/bIxfM1qzUIkc8oLstZBZHqDj5tFVEvQFyOkqhmQYE5KNWGZBPsezyLrv8U/rVS4CVHCOoioiBndHfZW3uejQvzvEWteKvji99Bdni4+8a58XNOeaYY/QD1vd4XTwK69MOdtL80rZkDxSnqsRDPytz58AGtI8b90urHcWBVjT+RYw6WEfPu/UJwnA4lRlAwTmEKRHhTbiZgBrvMxjyRyyS7OGSpFe8e9/FYV2Rprr9r1DzLZrx01pDa5nnqUGidn53lwOKWz1f8s6+ZFQAlmrLkUwvroOKGbJvk7Wlg7WYaxoq4hzs33XSKhcLF3wmNn9WrHLr12N97ww5FvuPN6WrA5Q/H5k3Nx8IiSdnOwGq3npLuU7rp8TH7Jh0emd9WYYy+9zftnHAJdy9Y93Pbk7ki2CVdQyNV75U=)
2026-04-08 00:19:33.347801 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHcH18p151y8YNdcCt+dhxEeUuCgxTbxnqYfQjbQ9ZMNw1DFhhZp6EoWUB7Q8MVG68t19czS1/vUPheiIKb4rh8=)
2026-04-08 00:19:33.347811 | orchestrator |
2026-04-08 00:19:33.347821 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-08 00:19:33.347830 | orchestrator | Wednesday 08 April 2026 00:19:30 +0000 (0:00:01.000) 0:00:19.223 *******
2026-04-08 00:19:33.347840 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKUnWKGUU13YScOONItCKGDpTGAQaL48uT3vD3nVHcTW)
2026-04-08 00:19:33.347850 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtYVGRdXyTXFR3A2IrU2Saj6V6+IGxCCuhhEziG9FXDDNB2i5mvrFUS2mihFuiJjrnPHdRnXCiOfb5ufpS/PtDdbF02Cy11kB0cM6tWwuyzx2FRCVgheT5o+BQcQ6EQLTjDAj/iVrli8x14VmMsWHDjeia+zlUM/OMzLoDvS1Fh4dH4fd8fWwbF2u1+WFrhL9TWwpml9uX6mQ6kODoZlDI91zKMOWa7xxeeLdq7AZYli8GL5T+Qe1mG3+t5MvMJyKsi1k8WglHNSkGtYExdBGDZ/6usB79PzdkcD1upbz8gi3RwO4NNXdvcB5f5q3c3u+mdckaCvIjnwdXC8b12VjKBPawBrpLmgK/MzmgECOT+2lB23Y5qVLyRg6nmXA+h7BzznfIzT42ltH19r7Zkld/ThdJehvgxrc0NPe+Ik8rTV1mdFrtCq4wntfM/1VApPcv6iCYte+hZwbcwtMuFBCFzzhtz7938VwBfj7lF4wGg/7YvDZiA1zYVeNzKPI0nyE=)
2026-04-08 00:19:33.347862 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKvREp20+uvBWCBGtFsnDnFg/7hYjTF8nxr8KJxS+NI1TWPgH2lCsaqL2TnLiLEf8jY2yI4aWE6KDE2+W9KMf+Q=)
2026-04-08 00:19:33.347872 | orchestrator |
2026-04-08 00:19:33.347882 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-08 00:19:33.347894 | orchestrator | Wednesday 08 April 2026 00:19:31 +0000 (0:00:00.997) 0:00:20.221 *******
2026-04-08 00:19:33.347905 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyrTmg9zPYbgCa3c8aP7mpPGPgeFqC0jufSFOmwy+QPRUUr76EM5XOEknk09GdsIWfiOZcP2y0suPT/lUrjjmmmrUXuQE76V35iF07G3nYb3H+8vXuf0Pr6f6+ElsUc4brfyA1on2peiYdEcmFqqNQeYURvEv04AJo7CvwnehBdbGv3f61JDbzhpYuInYo+gcyID+9kUWxfUy2vQxt1ErS6GtDNyrDT68+tGZ6J+c51AuqmthNJ36M2LmtSlziuk0QBtcWcY0sqkZ1QijHrVoPLlAqCTsuEd/Ze8+Ar4th+e+tHCaiYNkKq7FBn4nyt9diR3EfWIlsYpG49qX0C+PB3cR7skoMCTg+XEtB9/D5GO6POvwY6qEqNjk1kYEFvcL9bSIbCQfhBM02F7RpLFTWIDh7SdhI4ZtMoCvIJeo9DD1fxgnGog2fa4rEuxMrdYlBs0YsVWdLRUXy+44XHz4NpYXnxhgLR9ypmRlehpoLpnVnoOGx95Q7JxGPSxicDMs=)
2026-04-08 00:19:33.347915 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFqXQtTHadbZnPWUDpUIPbZZu5dwaTlUJbPrXI7IibUet6c2qNB8veghVkoAJ3GZfa1YAYKOBwzgAUtZE2lIaKM=)
2026-04-08 00:19:33.347926 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO9VNDUPvKm3Nk/Vs+jsjXyRs3O3IZHA84p2yFDzLi86)
2026-04-08 00:19:33.347936 | orchestrator |
2026-04-08 00:19:33.347946 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-08 00:19:33.347957 | orchestrator | Wednesday 08 April 2026 00:19:32 +0000 (0:00:01.039) 0:00:21.260 *******
2026-04-08 00:19:33.347967 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCq043tqLxH2Ef8S2qj48YN4K8aqqnRufsMZ8BVk3qyMV8tLepauddzacvPNhXeVIjTl+XvBzM6WvWnYGFJRS9tdSExcd1+v99mXeg2HblexK8nFxqUG3C76tsn11+Rs3KM8CWVux2UzzDl+ssFPhFbFaxoru230BOlolq5gV6QY8AQh+hsRWyMNVMzPZ5ch+0MmCBkNCzVWkocSbaY+2+km7xfdY4Z2F40UOBhGHQ91SyxNWsbGyPZOuz2uEcVDRdqHhnooWhcSw0jqEsl90WVw49rp9cnjLfOGMi9S1mrY7LFl5ynReSFf4ABU5qtE79gahiY24kufrDI2toTEl+ypuoAoN5pjIA5t5T+EqgKYXYIMfX/vJdKDSDyWpd76KUqIyZm9DgUPVzNdBlN+gGYqYM0Uo6JBYCme9IQl7m1DELK42sIgnDV43TwdZ3Ggfdz3aB7Hafnlu90cC0NkpsHaloIihbTUv10m2Q7vXMyoC6K+bJEIx0ALMDlm1MkgtM=)
2026-04-08 00:19:33.347984 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC1/024IAZ3snnUxWyfeeVylrqa+btB0welHEyy08afpAn9wJ0FWXGG8BNrdLuUfNl0Q0kGHtRNclW1mVr1s+8w=)
2026-04-08 00:19:37.348004 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPtrcYvYftVhA51KnsRKJZ6APYb7dZ3aMldhReYLB0r0)
2026-04-08 00:19:37.326575 | orchestrator |
2026-04-08 00:19:37.326707 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-08 00:19:37.326724 | orchestrator | Wednesday 08 April 2026 00:19:33 +0000 (0:00:00.997) 0:00:22.258 *******
2026-04-08 00:19:37.326760 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDXVg0b+gwiejISkksIBvAg3UPqjyi5QOIlZVNxqgaoCkThLPCUt0o7rdd0fm17YyhPI66VILx9ZxyHZSw6QD3M=)
2026-04-08 00:19:37.326777 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuA9sTIK9OCzZ7GbOSbUAw8dNkNduTcPvI4BWAKbrtxbMTjMK3Ncq/r1xTDILQF42ttXdOS9/gqH82VVDaQGNaj9Xr+gBZTxc2a3tadnUpTlZBOmYI3QZCsTfkJHP0tdeDyU6UxQPdHX1dALeiDuCzKYyk162YlZs5kJSF2H7pP0ajBH/cazJRHoWJJGwPuaMYLANXtn9hbnA8nu9dPHq7eLvbRsDuF8r35fBn/FBd22+FRTkAPceTXePPrfzkHJ5I1kF53W1QHGMIgewhsRisg5D91Mcj47Tj7RAMiQDRnhgEfsA6ZtDY4ZiL5iv1m2ppKMAl/+tpHOeMzWoMMQFkcOnBZ02fdplvOplXY6R8ZvVA716gLJQmsRkDJGASrSOZFdUsY/BWtZAGBFyY6vRPSFwJBLtpRJkFLB1gI9V/tgYMLrnRXyAx4QiyhRyoQRJD0Flm5rx19sU+6t3fV8XkxL3LaUCokUcOardMnGkXxZIUvbxRdsPvroT+20lq56M=)
2026-04-08 00:19:37.326792 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDYFFexjWDOY1K6etJHBgA+aaDCMxl1pwF1kCH9u8QUm)
2026-04-08 00:19:37.326804 | orchestrator |
2026-04-08 00:19:37.326816 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-08 00:19:37.326827 | orchestrator | Wednesday 08 April 2026 00:19:34 +0000 (0:00:01.008) 0:00:23.267 *******
2026-04-08 00:19:37.326843 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhgg1uSH8V9w1GuBTjKiEG4Pinjk8b/3oQ22+vKxEabwhpYleqhr0+iqbSVd8VAHtmVLBHUWskY5ZJ5163IHWLYq9s7glONNGKz0z23nmOaZz4erlvWlWwmVBF/AUHnUUgS3s8R64/+dE+Cz5ygfE0HkFQRcfkx5bzstUfdiCm/P13S8F4PQMrDf0au+uKeaKfXKwId9/Pw+SdLEv1k1gmQuFLOHpFE4KaSWMnSoxo6D8ZxzP6sOdazHClvWDPC/L2chzLsjTmTde4N7HqmVO7zDKbXCrBgcrEtdcABYcA6FNmfEAIcZdNPuGUsfEUYRh9HwDJtiJJy3DJ2ZcEwf4/rS1kTfgUMSWIgVI/hMrR22RY8mtHQ7i9mbAmRyygaosTehfWN3DyFIDwROUKNqLBFVusQTdF0ulrfigV9GmMpevtOPjhTVKOgU7K+YKE0KNRKXC/A+RyeYluJ/LvFpQ2Wfx6MHtiqDhhTgAFEsV+VYkx/hE/vpmzYONdtVRpEls=)
2026-04-08 00:19:37.326856 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE9wCWCKMQDPmxZLonjNczZK5BKmsdjgNMLEpcaLsTsDP89DoZ84WKOh2l4R6iFgKX0sBdEsEXn1/T8WXD7pyhU=)
2026-04-08 00:19:37.326867 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFpnKELlmuHX8BsioFwYIGmSElNQEamU+vOqwXBv0g1J)
2026-04-08 00:19:37.326878 | orchestrator |
2026-04-08 00:19:37.326891 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-08 00:19:37.326903 | orchestrator | Wednesday 08 April 2026 00:19:35 +0000 (0:00:01.056) 0:00:24.324 *******
2026-04-08 00:19:37.326914 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAaPk4MZZFS7RUisX/wlJHqPfYGvNjm7Fv7SXm3OLDpTxD0U4MGBJ4Ic/QhSglEb5PXoW8OykZ6SvKgtrQ+jsFE=)
2026-04-08 00:19:37.326926 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxdqiuffq7wyibB+/ZA2/uIh12u+5XQdFA6xPMV/49+uMtNX9aZRAYoM/C/nYQNHHLkuxtEdsBHMMLh0GLjj1DcqSuz9DQUMVxO27EzgedBYxk9mbGQ+9hvWjG5LfxOmfGtn28QCUiqyQCoEbWEE2bjXJJr0i/6EG8TMRjVLDQg6YLi4fFr58DzLAKkdbfdIlEVTk9XzdOQCg/3pF+WsVeRuAP5g6TlD8DEZoypN6MndxVSOKr2ZbWyU/2Uj2sFZmoPqmj92eUpS4ZuTDdYoLZN0kYjA83aRI/9qsiN1ExjIvACJAd5EH+yj/IAozKW0xLYptBPXhAf0NBp2pN4UMU8kZc9XdNiMgPaU0KmZLOkV6U5vZ8NxC3EsvKakLlFb3UNTbyQaqA6UH5NvnTUyft6RNOyXLF0EtHExM42aOkUIbeQ19aUc++gBk9DzrIYWk22xPBa953Uo3OvEVXAXH2GGWKHX8PzC+Yd6GC1RM5nmeAiv9VYfQO2aIiyIyWvps=)
2026-04-08 00:19:37.326960 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOfneW0Qt0CBEJ7h/hPeD2RoJZxMALb8xOg/0ZoEHaYs)
2026-04-08 00:19:37.326972 | orchestrator |
2026-04-08 00:19:37.326983 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2026-04-08 00:19:37.326994 | orchestrator | Wednesday 08 April 2026 00:19:36 +0000 (0:00:00.986) 0:00:25.311 *******
2026-04-08 00:19:37.327006 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-08 00:19:37.327018 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-08 00:19:37.327029 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-08 00:19:37.327040 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-08 00:19:37.327050 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-08 00:19:37.327061 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-08 00:19:37.327103 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-08 00:19:37.327123 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:19:37.327146 | orchestrator |
2026-04-08 00:19:37.327196 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2026-04-08 00:19:37.327215 | orchestrator | Wednesday 08 April 2026 00:19:36 +0000 (0:00:00.175) 0:00:25.487 *******
2026-04-08 00:19:37.327234 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:19:37.327250 | orchestrator |
2026-04-08 00:19:37.327269 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2026-04-08 00:19:37.327287 | orchestrator | Wednesday 08 April 2026 00:19:36 +0000 (0:00:00.051) 0:00:25.538 *******
2026-04-08 00:19:37.327305 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:19:37.327325 | orchestrator |
2026-04-08 00:19:37.327344 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2026-04-08 00:19:37.327362 | orchestrator | Wednesday 08 April 2026 00:19:36 +0000 (0:00:00.041) 0:00:25.580 *******
2026-04-08 00:19:37.327381 | orchestrator | changed: [testbed-manager]
2026-04-08 00:19:37.327399 | orchestrator |
2026-04-08 00:19:37.327418 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:19:37.327437 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-08 00:19:37.327458 | orchestrator |
2026-04-08 00:19:37.327476 | orchestrator |
2026-04-08 00:19:37.327493 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:19:37.327510 | orchestrator | Wednesday 08 April 2026 00:19:37 +0000 (0:00:00.458) 0:00:26.038 *******
2026-04-08 00:19:37.327526 | orchestrator | ===============================================================================
2026-04-08 00:19:37.327543 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.09s
2026-04-08 00:19:37.327560 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 4.95s
2026-04-08 00:19:37.327579 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2026-04-08 00:19:37.327597 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2026-04-08 00:19:37.327615 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2026-04-08 00:19:37.327634 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2026-04-08 00:19:37.327653 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2026-04-08 00:19:37.327670 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2026-04-08 00:19:37.327687 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2026-04-08 00:19:37.327706 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s
2026-04-08 00:19:37.327743 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s
2026-04-08 00:19:37.327762 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s
2026-04-08 00:19:37.327779 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s
2026-04-08 00:19:37.327795 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s
2026-04-08 00:19:37.327811 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.92s
2026-04-08 00:19:37.327828 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.90s
2026-04-08 00:19:37.327846 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.46s
2026-04-08 00:19:37.327865 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s
2026-04-08 00:19:37.327883 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s
2026-04-08 00:19:37.327902 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s
2026-04-08 00:19:37.491786 | orchestrator | + osism apply squid
2026-04-08 00:19:48.724187 | orchestrator | 2026-04-08 00:19:48 | INFO  | Prepare task for execution of squid.
2026-04-08 00:19:48.795520 | orchestrator | 2026-04-08 00:19:48 | INFO  | Task e8ca4110-d343-4859-bedf-f7f08b871999 (squid) was prepared for execution.
2026-04-08 00:19:48.795612 | orchestrator | 2026-04-08 00:19:48 | INFO  | It takes a moment until task e8ca4110-d343-4859-bedf-f7f08b871999 (squid) has been started and output is visible here.
2026-04-08 00:21:40.011942 | orchestrator |
2026-04-08 00:21:40.012092 | orchestrator | PLAY [Apply role squid] ********************************************************
2026-04-08 00:21:40.012105 | orchestrator |
2026-04-08 00:21:40.012132 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2026-04-08 00:21:40.012141 | orchestrator | Wednesday 08 April 2026 00:19:51 +0000 (0:00:00.187) 0:00:00.187 *******
2026-04-08 00:21:40.012149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2026-04-08 00:21:40.012157 | orchestrator |
2026-04-08 00:21:40.012165 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2026-04-08 00:21:40.012173 | orchestrator | Wednesday 08 April 2026 00:19:51 +0000 (0:00:00.078) 0:00:00.265 *******
2026-04-08 00:21:40.012180 | orchestrator | ok: [testbed-manager]
2026-04-08 00:21:40.012189 | orchestrator |
2026-04-08 00:21:40.012197 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2026-04-08 00:21:40.012204 | orchestrator | Wednesday 08 April 2026 00:19:53 +0000 (0:00:02.011) 0:00:02.277 *******
2026-04-08 00:21:40.012213 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2026-04-08 00:21:40.012220 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2026-04-08 00:21:40.012228 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2026-04-08 00:21:40.012235 | orchestrator |
2026-04-08 00:21:40.012243 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2026-04-08 00:21:40.012250 | orchestrator | Wednesday 08 April 2026 00:19:54 +0000 (0:00:01.076) 0:00:03.354 *******
2026-04-08 00:21:40.012258 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2026-04-08 00:21:40.012265 | orchestrator |
2026-04-08 00:21:40.012273 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2026-04-08 00:21:40.012280 | orchestrator | Wednesday 08 April 2026 00:19:55 +0000 (0:00:00.904) 0:00:04.258 *******
2026-04-08 00:21:40.012288 | orchestrator | ok: [testbed-manager]
2026-04-08 00:21:40.012296 | orchestrator |
2026-04-08 00:21:40.012304 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2026-04-08 00:21:40.012311 | orchestrator | Wednesday 08 April 2026 00:19:56 +0000 (0:00:00.323) 0:00:04.582 *******
2026-04-08 00:21:40.012319 | orchestrator | changed: [testbed-manager]
2026-04-08 00:21:40.012347 | orchestrator |
2026-04-08 00:21:40.012358 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2026-04-08 00:21:40.012366 | orchestrator | Wednesday 08 April 2026 00:19:56 +0000 (0:00:00.812) 0:00:05.394 *******
2026-04-08 00:21:40.012373 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2026-04-08 00:21:40.012382 | orchestrator | ok: [testbed-manager]
2026-04-08 00:21:40.012389 | orchestrator |
2026-04-08 00:21:40.012396 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-04-08 00:21:40.012404 | orchestrator | Wednesday 08 April 2026 00:20:27 +0000 (0:00:30.229) 0:00:35.623 *******
2026-04-08 00:21:40.012411 | orchestrator | changed: [testbed-manager]
2026-04-08 00:21:40.012418 | orchestrator |
2026-04-08 00:21:40.012425 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-04-08 00:21:40.012433 | orchestrator | Wednesday 08 April 2026 00:20:39 +0000 (0:00:11.917) 0:00:47.540 *******
2026-04-08 00:21:40.012440 | orchestrator | Pausing for 60 seconds
2026-04-08 00:21:40.012448 | orchestrator | changed: [testbed-manager]
2026-04-08 00:21:40.012455 | orchestrator |
2026-04-08 00:21:40.012462 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-04-08 00:21:40.012473 | orchestrator | Wednesday 08 April 2026 00:21:39 +0000 (0:01:00.076) 0:01:47.617 *******
2026-04-08 00:21:40.012481 | orchestrator | ok: [testbed-manager]
2026-04-08 00:21:40.012488 | orchestrator |
2026-04-08 00:21:40.012495 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-04-08 00:21:40.012505 | orchestrator | Wednesday 08 April 2026 00:21:39 +0000 (0:00:00.055) 0:01:47.672 *******
2026-04-08 00:21:40.012514 | orchestrator | changed: [testbed-manager]
2026-04-08 00:21:40.012522 | orchestrator |
2026-04-08 00:21:40.012531 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:21:40.012540 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:21:40.012548 | orchestrator |
2026-04-08 00:21:40.012556 | orchestrator |
2026-04-08 00:21:40.012564 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:21:40.012574 | orchestrator | Wednesday 08 April 2026 00:21:39 +0000 (0:00:00.560) 0:01:48.233 *******
2026-04-08 00:21:40.012582 | orchestrator | ===============================================================================
2026-04-08 00:21:40.012590 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2026-04-08 00:21:40.012598 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.23s
2026-04-08 00:21:40.012607 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.92s
2026-04-08 00:21:40.012615 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.01s
2026-04-08 00:21:40.012623 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.08s
2026-04-08 00:21:40.012632 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.90s
2026-04-08 00:21:40.012640 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.81s
2026-04-08 00:21:40.012652 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.56s
2026-04-08 00:21:40.012664 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.32s
2026-04-08 00:21:40.012676 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s
2026-04-08 00:21:40.012688 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s
2026-04-08 00:21:40.163274 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-04-08 00:21:40.163357 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-08 00:21:40.245278 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-08 00:21:40.245369 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/
2026-04-08 00:21:40.251774 | orchestrator | + set -e
2026-04-08 00:21:40.251807 | orchestrator | + NAMESPACE=kolla/release/
2026-04-08 00:21:40.251821 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-04-08 00:21:40.258449 | orchestrator | ++ semver 10.0.0 9.0.0
2026-04-08 00:21:40.315617 | orchestrator | + [[ 1 -lt 0 ]]
2026-04-08 00:21:40.316253 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-04-08 00:21:51.611854 | orchestrator | 2026-04-08 00:21:51 | INFO  | Prepare task for execution of operator.
2026-04-08 00:21:51.687267 | orchestrator | 2026-04-08 00:21:51 | INFO  | Task b7937317-c742-44fd-997b-dcd7e531a1ff (operator) was prepared for execution.
2026-04-08 00:21:51.687378 | orchestrator | 2026-04-08 00:21:51 | INFO  | It takes a moment until task b7937317-c742-44fd-997b-dcd7e531a1ff (operator) has been started and output is visible here.
2026-04-08 00:22:05.679144 | orchestrator |
2026-04-08 00:22:05.679256 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-04-08 00:22:05.679273 | orchestrator |
2026-04-08 00:22:05.679285 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-08 00:22:05.679297 | orchestrator | Wednesday 08 April 2026 00:21:54 +0000 (0:00:00.180) 0:00:00.180 *******
2026-04-08 00:22:05.679308 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:22:05.679320 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:22:05.679331 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:22:05.679342 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:22:05.679353 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:22:05.679364 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:22:05.679375 | orchestrator |
2026-04-08 00:22:05.679386 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-04-08 00:22:05.679397 | orchestrator | Wednesday 08 April 2026 00:21:57 +0000 (0:00:02.921) 0:00:03.102 *******
2026-04-08 00:22:05.679408 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:22:05.679419 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:22:05.679430 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:22:05.679441 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:22:05.679452 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:22:05.679463 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:22:05.679474 | orchestrator |
2026-04-08 00:22:05.679485 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-04-08 00:22:05.679496 | orchestrator |
2026-04-08 00:22:05.679507 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-04-08 00:22:05.679518 | orchestrator | Wednesday 08 April 2026 00:21:58 +0000 (0:00:00.684) 0:00:03.786 *******
2026-04-08 00:22:05.679529 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:22:05.679541 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:22:05.679552 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:22:05.679563 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:22:05.679574 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:22:05.679584 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:22:05.679595 | orchestrator |
2026-04-08 00:22:05.679606 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-04-08 00:22:05.679617 | orchestrator | Wednesday 08 April 2026 00:21:58 +0000 (0:00:00.166) 0:00:03.953 *******
2026-04-08 00:22:05.679628 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:22:05.679639 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:22:05.679649 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:22:05.679660 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:22:05.679671 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:22:05.679685 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:22:05.679697 | orchestrator |
2026-04-08 00:22:05.679711 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-04-08 00:22:05.679724 | orchestrator | Wednesday 08 April 2026 00:21:58 +0000 (0:00:00.134) 0:00:04.087 *******
2026-04-08 00:22:05.679737 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:22:05.679751 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:22:05.679764 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:22:05.679777 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:22:05.679789 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:22:05.679828 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:22:05.679842 | orchestrator |
2026-04-08 00:22:05.679856 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-04-08 00:22:05.679870 | orchestrator | Wednesday 08 April 2026 00:21:59 +0000 (0:00:00.661) 0:00:04.749 *******
2026-04-08 00:22:05.679889 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:22:05.679908 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:22:05.679927 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:22:05.679946 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:22:05.680001 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:22:05.680023 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:22:05.680042 | orchestrator |
2026-04-08 00:22:05.680061 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-04-08 00:22:05.680080 | orchestrator | Wednesday 08 April 2026 00:22:00 +0000 (0:00:00.838) 0:00:05.587 *******
2026-04-08 00:22:05.680093 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-04-08 00:22:05.680104 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-04-08 00:22:05.680115 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-04-08 00:22:05.680126 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-04-08 00:22:05.680137 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-04-08 00:22:05.680150 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-04-08 00:22:05.680169 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-04-08 00:22:05.680187 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-04-08 00:22:05.680205 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-04-08 00:22:05.680223 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-04-08 00:22:05.680241 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-04-08 00:22:05.680259 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-04-08 00:22:05.680277 | orchestrator |
2026-04-08 00:22:05.680295 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-04-08 00:22:05.680336 | orchestrator | Wednesday 08 April 2026 00:22:01 +0000 (0:00:01.134) 0:00:06.722 *******
2026-04-08 00:22:05.680356 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:22:05.680375 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:22:05.680390 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:22:05.680401 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:22:05.680412 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:22:05.680423 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:22:05.680434 | orchestrator |
2026-04-08 00:22:05.680445 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-04-08 00:22:05.680456 | orchestrator | Wednesday 08 April 2026 00:22:02 +0000 (0:00:01.254) 0:00:07.976 *******
2026-04-08 00:22:05.680468 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-04-08 00:22:05.680479 | orchestrator | changed: [testbed-node-2] => (item=export
LANGUAGE=C.UTF-8) 2026-04-08 00:22:05.680490 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-04-08 00:22:05.680500 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-04-08 00:22:05.680511 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-04-08 00:22:05.680543 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-04-08 00:22:05.680555 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-04-08 00:22:05.680566 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-04-08 00:22:05.680576 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-04-08 00:22:05.680587 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-04-08 00:22:05.680598 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-04-08 00:22:05.680608 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-04-08 00:22:05.680619 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-04-08 00:22:05.680641 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-04-08 00:22:05.680652 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-04-08 00:22:05.680663 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-04-08 00:22:05.680674 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-04-08 00:22:05.680684 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-04-08 00:22:05.680695 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-04-08 00:22:05.680705 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-04-08 00:22:05.680716 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-04-08 00:22:05.680727 | orchestrator | 2026-04-08 00:22:05.680737 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-08 00:22:05.680749 | orchestrator | Wednesday 08 April 2026 00:22:03 +0000 (0:00:01.189) 0:00:09.166 ******* 2026-04-08 00:22:05.680760 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:22:05.680771 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:22:05.680782 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:22:05.680793 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:22:05.680803 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:22:05.680814 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:22:05.680825 | orchestrator | 2026-04-08 00:22:05.680836 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-08 00:22:05.680847 | orchestrator | Wednesday 08 April 2026 00:22:03 +0000 (0:00:00.139) 0:00:09.305 ******* 2026-04-08 00:22:05.680863 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:22:05.680874 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:22:05.680885 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:22:05.680896 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:22:05.680907 | orchestrator | skipping: 
[testbed-node-4] 2026-04-08 00:22:05.680917 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:22:05.680928 | orchestrator | 2026-04-08 00:22:05.680939 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-08 00:22:05.680950 | orchestrator | Wednesday 08 April 2026 00:22:04 +0000 (0:00:00.152) 0:00:09.458 ******* 2026-04-08 00:22:05.680986 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:22:05.681005 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:22:05.681017 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:22:05.681028 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:22:05.681038 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:22:05.681055 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:22:05.681073 | orchestrator | 2026-04-08 00:22:05.681091 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-08 00:22:05.681109 | orchestrator | Wednesday 08 April 2026 00:22:04 +0000 (0:00:00.538) 0:00:09.996 ******* 2026-04-08 00:22:05.681127 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:22:05.681145 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:22:05.681163 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:22:05.681180 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:22:05.681196 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:22:05.681260 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:22:05.681283 | orchestrator | 2026-04-08 00:22:05.681302 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-08 00:22:05.681322 | orchestrator | Wednesday 08 April 2026 00:22:04 +0000 (0:00:00.178) 0:00:10.174 ******* 2026-04-08 00:22:05.681334 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-08 00:22:05.681345 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:22:05.681357 | orchestrator | changed: 
[testbed-node-1] => (item=None) 2026-04-08 00:22:05.681367 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-08 00:22:05.681378 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-08 00:22:05.681399 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:22:05.681410 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:22:05.681421 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:22:05.681432 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-08 00:22:05.681444 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-08 00:22:05.681455 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:22:05.681465 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:22:05.681476 | orchestrator | 2026-04-08 00:22:05.681488 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-08 00:22:05.681499 | orchestrator | Wednesday 08 April 2026 00:22:05 +0000 (0:00:00.655) 0:00:10.829 ******* 2026-04-08 00:22:05.681510 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:22:05.681521 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:22:05.681532 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:22:05.681543 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:22:05.681554 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:22:05.681572 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:22:05.681590 | orchestrator | 2026-04-08 00:22:05.681606 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-08 00:22:05.681624 | orchestrator | Wednesday 08 April 2026 00:22:05 +0000 (0:00:00.133) 0:00:10.963 ******* 2026-04-08 00:22:05.681643 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:22:05.681661 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:22:05.681679 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:22:05.681698 | orchestrator | skipping: 
[testbed-node-3] 2026-04-08 00:22:05.681732 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:22:06.816392 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:22:06.816502 | orchestrator | 2026-04-08 00:22:06.816516 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-08 00:22:06.816525 | orchestrator | Wednesday 08 April 2026 00:22:05 +0000 (0:00:00.135) 0:00:11.099 ******* 2026-04-08 00:22:06.816532 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:22:06.816539 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:22:06.816546 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:22:06.816553 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:22:06.816560 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:22:06.816567 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:22:06.816574 | orchestrator | 2026-04-08 00:22:06.816581 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-08 00:22:06.816588 | orchestrator | Wednesday 08 April 2026 00:22:05 +0000 (0:00:00.131) 0:00:11.230 ******* 2026-04-08 00:22:06.816595 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:22:06.816601 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:22:06.816608 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:22:06.816615 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:22:06.816622 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:22:06.816628 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:22:06.816635 | orchestrator | 2026-04-08 00:22:06.816642 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-08 00:22:06.816649 | orchestrator | Wednesday 08 April 2026 00:22:06 +0000 (0:00:00.620) 0:00:11.850 ******* 2026-04-08 00:22:06.816655 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:22:06.816662 | orchestrator | skipping: 
[testbed-node-1] 2026-04-08 00:22:06.816669 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:22:06.816675 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:22:06.816682 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:22:06.816688 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:22:06.816695 | orchestrator | 2026-04-08 00:22:06.816702 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:22:06.816710 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-08 00:22:06.816741 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-08 00:22:06.816748 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-08 00:22:06.816755 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-08 00:22:06.816761 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-08 00:22:06.816768 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-08 00:22:06.816775 | orchestrator | 2026-04-08 00:22:06.816781 | orchestrator | 2026-04-08 00:22:06.816788 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:22:06.816800 | orchestrator | Wednesday 08 April 2026 00:22:06 +0000 (0:00:00.200) 0:00:12.050 ******* 2026-04-08 00:22:06.816810 | orchestrator | =============================================================================== 2026-04-08 00:22:06.816821 | orchestrator | Gathering Facts --------------------------------------------------------- 2.92s 2026-04-08 00:22:06.816834 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.25s 2026-04-08 00:22:06.816845 | 
orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.19s 2026-04-08 00:22:06.816856 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.13s 2026-04-08 00:22:06.816868 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.84s 2026-04-08 00:22:06.816880 | orchestrator | Do not require tty for all users ---------------------------------------- 0.68s 2026-04-08 00:22:06.816891 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.66s 2026-04-08 00:22:06.816902 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.66s 2026-04-08 00:22:06.816910 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.62s 2026-04-08 00:22:06.816917 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.54s 2026-04-08 00:22:06.816923 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.20s 2026-04-08 00:22:06.816930 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s 2026-04-08 00:22:06.816937 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s 2026-04-08 00:22:06.816944 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.15s 2026-04-08 00:22:06.816952 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s 2026-04-08 00:22:06.817166 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s 2026-04-08 00:22:06.817177 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.13s 2026-04-08 00:22:06.817185 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.13s 2026-04-08 
00:22:06.817232 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s 2026-04-08 00:22:06.973486 | orchestrator | + osism apply --environment custom facts 2026-04-08 00:22:08.207040 | orchestrator | 2026-04-08 00:22:08 | INFO  | Trying to run play facts in environment custom 2026-04-08 00:22:18.291319 | orchestrator | 2026-04-08 00:22:18 | INFO  | Prepare task for execution of facts. 2026-04-08 00:22:18.357498 | orchestrator | 2026-04-08 00:22:18 | INFO  | Task 57afbdec-4422-49d7-b42b-f4b200382ce7 (facts) was prepared for execution. 2026-04-08 00:22:18.357593 | orchestrator | 2026-04-08 00:22:18 | INFO  | It takes a moment until task 57afbdec-4422-49d7-b42b-f4b200382ce7 (facts) has been started and output is visible here. 2026-04-08 00:22:59.443797 | orchestrator | 2026-04-08 00:22:59.443905 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-04-08 00:22:59.443991 | orchestrator | 2026-04-08 00:22:59.444005 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-08 00:22:59.444017 | orchestrator | Wednesday 08 April 2026 00:22:21 +0000 (0:00:00.112) 0:00:00.112 ******* 2026-04-08 00:22:59.444028 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:22:59.444041 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:22:59.444052 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:22:59.444064 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:22:59.444075 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:22:59.444086 | orchestrator | ok: [testbed-manager] 2026-04-08 00:22:59.444097 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:22:59.444108 | orchestrator | 2026-04-08 00:22:59.444120 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-04-08 00:22:59.444131 | orchestrator | Wednesday 08 April 2026 00:22:22 +0000 (0:00:01.356) 0:00:01.469 
******* 2026-04-08 00:22:59.444142 | orchestrator | ok: [testbed-manager] 2026-04-08 00:22:59.444153 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:22:59.444164 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:22:59.444175 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:22:59.444185 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:22:59.444196 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:22:59.444207 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:22:59.444218 | orchestrator | 2026-04-08 00:22:59.444229 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-04-08 00:22:59.444240 | orchestrator | 2026-04-08 00:22:59.444252 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-08 00:22:59.444280 | orchestrator | Wednesday 08 April 2026 00:22:23 +0000 (0:00:01.215) 0:00:02.684 ******* 2026-04-08 00:22:59.444291 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:22:59.444303 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:22:59.444314 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:22:59.444325 | orchestrator | 2026-04-08 00:22:59.444337 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-08 00:22:59.444350 | orchestrator | Wednesday 08 April 2026 00:22:24 +0000 (0:00:00.091) 0:00:02.776 ******* 2026-04-08 00:22:59.444363 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:22:59.444376 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:22:59.444389 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:22:59.444402 | orchestrator | 2026-04-08 00:22:59.444414 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-08 00:22:59.444427 | orchestrator | Wednesday 08 April 2026 00:22:24 +0000 (0:00:00.183) 0:00:02.959 ******* 2026-04-08 00:22:59.444440 | orchestrator | ok: [testbed-node-3] 2026-04-08 
00:22:59.444453 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:22:59.444465 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:22:59.444477 | orchestrator | 2026-04-08 00:22:59.444489 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-08 00:22:59.444502 | orchestrator | Wednesday 08 April 2026 00:22:24 +0000 (0:00:00.199) 0:00:03.159 ******* 2026-04-08 00:22:59.444515 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:22:59.444529 | orchestrator | 2026-04-08 00:22:59.444541 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-08 00:22:59.444555 | orchestrator | Wednesday 08 April 2026 00:22:24 +0000 (0:00:00.133) 0:00:03.292 ******* 2026-04-08 00:22:59.444567 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:22:59.444579 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:22:59.444592 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:22:59.444605 | orchestrator | 2026-04-08 00:22:59.444617 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-08 00:22:59.444654 | orchestrator | Wednesday 08 April 2026 00:22:24 +0000 (0:00:00.441) 0:00:03.734 ******* 2026-04-08 00:22:59.444666 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:22:59.444680 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:22:59.444692 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:22:59.444703 | orchestrator | 2026-04-08 00:22:59.444714 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-08 00:22:59.444725 | orchestrator | Wednesday 08 April 2026 00:22:25 +0000 (0:00:00.105) 0:00:03.839 ******* 2026-04-08 00:22:59.444735 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:22:59.444746 | orchestrator | 
changed: [testbed-node-4] 2026-04-08 00:22:59.444757 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:22:59.444768 | orchestrator | 2026-04-08 00:22:59.444779 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-08 00:22:59.444790 | orchestrator | Wednesday 08 April 2026 00:22:26 +0000 (0:00:01.011) 0:00:04.851 ******* 2026-04-08 00:22:59.444800 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:22:59.444811 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:22:59.444822 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:22:59.444833 | orchestrator | 2026-04-08 00:22:59.444844 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-08 00:22:59.444855 | orchestrator | Wednesday 08 April 2026 00:22:26 +0000 (0:00:00.456) 0:00:05.307 ******* 2026-04-08 00:22:59.444866 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:22:59.444876 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:22:59.444888 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:22:59.444898 | orchestrator | 2026-04-08 00:22:59.444910 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-08 00:22:59.444938 | orchestrator | Wednesday 08 April 2026 00:22:27 +0000 (0:00:01.000) 0:00:06.308 ******* 2026-04-08 00:22:59.444949 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:22:59.444960 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:22:59.444971 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:22:59.444982 | orchestrator | 2026-04-08 00:22:59.444993 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-04-08 00:22:59.445004 | orchestrator | Wednesday 08 April 2026 00:22:42 +0000 (0:00:15.322) 0:00:21.631 ******* 2026-04-08 00:22:59.445015 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:22:59.445025 | orchestrator | skipping: [testbed-node-4] 
2026-04-08 00:22:59.445037 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:22:59.445048 | orchestrator |
2026-04-08 00:22:59.445059 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-04-08 00:22:59.445087 | orchestrator | Wednesday 08 April 2026 00:22:42 +0000 (0:00:00.104) 0:00:21.735 *******
2026-04-08 00:22:59.445098 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:22:59.445109 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:22:59.445121 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:22:59.445131 | orchestrator |
2026-04-08 00:22:59.445143 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-08 00:22:59.445153 | orchestrator | Wednesday 08 April 2026 00:22:50 +0000 (0:00:07.784) 0:00:29.520 *******
2026-04-08 00:22:59.445164 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:22:59.445175 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:22:59.445186 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:22:59.445197 | orchestrator |
2026-04-08 00:22:59.445208 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-08 00:22:59.445219 | orchestrator | Wednesday 08 April 2026 00:22:51 +0000 (0:00:00.432) 0:00:29.952 *******
2026-04-08 00:22:59.445230 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-04-08 00:22:59.445240 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-04-08 00:22:59.445251 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-04-08 00:22:59.445262 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-04-08 00:22:59.445272 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-04-08 00:22:59.445291 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-04-08 00:22:59.445302 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-04-08 00:22:59.445319 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-04-08 00:22:59.445330 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-04-08 00:22:59.445341 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-04-08 00:22:59.445351 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-04-08 00:22:59.445362 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-04-08 00:22:59.445373 | orchestrator |
2026-04-08 00:22:59.445384 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-08 00:22:59.445395 | orchestrator | Wednesday 08 April 2026 00:22:54 +0000 (0:00:03.341) 0:00:33.294 *******
2026-04-08 00:22:59.445406 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:22:59.445416 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:22:59.445427 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:22:59.445438 | orchestrator |
2026-04-08 00:22:59.445449 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-08 00:22:59.445460 | orchestrator |
2026-04-08 00:22:59.445470 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-08 00:22:59.445481 | orchestrator | Wednesday 08 April 2026 00:22:55 +0000 (0:00:01.268) 0:00:34.562 *******
2026-04-08 00:22:59.445493 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:22:59.445504 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:22:59.445515 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:22:59.445525 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:22:59.445536 | orchestrator | ok: [testbed-manager]
2026-04-08 00:22:59.445546 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:22:59.445557 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:22:59.445568 | orchestrator |
2026-04-08 00:22:59.445579 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:22:59.445591 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:22:59.445602 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:22:59.445613 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:22:59.445624 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:22:59.445635 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:22:59.445646 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:22:59.445657 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:22:59.445668 | orchestrator |
2026-04-08 00:22:59.445679 | orchestrator |
2026-04-08 00:22:59.445690 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:22:59.445701 | orchestrator | Wednesday 08 April 2026 00:22:59 +0000 (0:00:03.599) 0:00:38.162 *******
2026-04-08 00:22:59.445712 | orchestrator | ===============================================================================
2026-04-08 00:22:59.445723 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.32s
2026-04-08 00:22:59.445733 | orchestrator | Install required packages (Debian) -------------------------------------- 7.78s
2026-04-08 00:22:59.445744 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.60s
2026-04-08 00:22:59.445762 | orchestrator | Copy fact files --------------------------------------------------------- 3.34s
2026-04-08 00:22:59.445772 | orchestrator | Create custom facts directory ------------------------------------------- 1.36s
2026-04-08 00:22:59.445783 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.27s
2026-04-08 00:22:59.445801 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s
2026-04-08 00:22:59.608351 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.01s
2026-04-08 00:22:59.608441 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.00s
2026-04-08 00:22:59.608452 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2026-04-08 00:22:59.608463 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s
2026-04-08 00:22:59.608473 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s
2026-04-08 00:22:59.608483 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2026-04-08 00:22:59.608493 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s
2026-04-08 00:22:59.608503 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s
2026-04-08 00:22:59.608513 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2026-04-08 00:22:59.608523 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-04-08 00:22:59.608533 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s
2026-04-08 00:22:59.767352 | orchestrator | + osism apply bootstrap
2026-04-08 00:23:11.086389 | orchestrator | 2026-04-08 00:23:11 | INFO  | Prepare task for execution of bootstrap.
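The shell trace at the top of this excerpt rewrites `docker_namespace` in the kolla inventory and then gates on a semver comparison (`semver 10.0.0 9.0.0` prints `1`, so `[[ 1 -lt 0 ]]` is false and the downgrade branch is skipped). A minimal sketch of that gate, using a temporary file instead of the real `/opt/configuration` path; `compare_versions` is a hypothetical stand-in for the testbed's `semver` helper, not the actual tool:

```shell
set -e

# Rewrite docker_namespace the same way the job does, but on a temp copy.
NAMESPACE="kolla/release/"
f=$(mktemp)
echo "docker_namespace: old/value" > "$f"
sed -i "s#docker_namespace: .*#docker_namespace: ${NAMESPACE}#g" "$f"
grep -q "docker_namespace: kolla/release/" "$f"

# compare_versions: prints 1 if v1 > v2, -1 if v1 < v2, 0 if equal
# (assumed semantics of the semver helper seen in the trace).
compare_versions() {
  if [ "$1" = "$2" ]; then echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then echo -1
  else echo 1
  fi
}

result=$(compare_versions 10.0.0 9.0.0)
# Only a downgrade (result < 0) would take the special branch.
if [ "$result" -lt 0 ]; then echo "downgrade"; else echo "no downgrade"; fi
```

The `[[ 1 -lt 0 ]]` line in the log is bash's trace of exactly this kind of check after the helper's output was substituted in.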
2026-04-08 00:23:11.159079 | orchestrator | 2026-04-08 00:23:11 | INFO  | Task 397ec146-2758-4716-97bc-7a48d8138bd9 (bootstrap) was prepared for execution.
2026-04-08 00:23:11.159160 | orchestrator | 2026-04-08 00:23:11 | INFO  | It takes a moment until task 397ec146-2758-4716-97bc-7a48d8138bd9 (bootstrap) has been started and output is visible here.
2026-04-08 00:23:26.106693 | orchestrator |
2026-04-08 00:23:26.106769 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-04-08 00:23:26.106777 | orchestrator |
2026-04-08 00:23:26.106781 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-04-08 00:23:26.106786 | orchestrator | Wednesday 08 April 2026 00:23:14 +0000 (0:00:00.190) 0:00:00.190 *******
2026-04-08 00:23:26.106791 | orchestrator | ok: [testbed-manager]
2026-04-08 00:23:26.106796 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:23:26.106800 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:23:26.106804 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:23:26.106809 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:23:26.106812 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:23:26.106817 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:23:26.106820 | orchestrator |
2026-04-08 00:23:26.106825 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-08 00:23:26.106828 | orchestrator |
2026-04-08 00:23:26.106832 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-08 00:23:26.106837 | orchestrator | Wednesday 08 April 2026 00:23:14 +0000 (0:00:00.295) 0:00:00.486 *******
2026-04-08 00:23:26.106840 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:23:26.106844 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:23:26.106848 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:23:26.106852 | orchestrator | ok: [testbed-manager]
2026-04-08 00:23:26.106856 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:23:26.106860 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:23:26.106864 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:23:26.106868 | orchestrator |
2026-04-08 00:23:26.106872 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-08 00:23:26.106875 | orchestrator |
2026-04-08 00:23:26.106934 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-08 00:23:26.106939 | orchestrator | Wednesday 08 April 2026 00:23:19 +0000 (0:00:04.692) 0:00:05.178 *******
2026-04-08 00:23:26.106943 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-08 00:23:26.106948 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-08 00:23:26.106952 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-08 00:23:26.106956 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-08 00:23:26.106959 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-08 00:23:26.106963 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-08 00:23:26.106967 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-08 00:23:26.106971 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-08 00:23:26.106975 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-08 00:23:26.106979 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-08 00:23:26.106983 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-08 00:23:26.106986 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-08 00:23:26.106990 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-08 00:23:26.106994 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-08 00:23:26.106998 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-08 00:23:26.107001 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-08 00:23:26.107005 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-08 00:23:26.107009 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-08 00:23:26.107013 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-08 00:23:26.107016 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-08 00:23:26.107020 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-08 00:23:26.107024 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:23:26.107028 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-08 00:23:26.107031 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-08 00:23:26.107035 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-08 00:23:26.107039 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-08 00:23:26.107043 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:23:26.107046 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-08 00:23:26.107050 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:23:26.107054 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-08 00:23:26.107058 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-08 00:23:26.107061 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-08 00:23:26.107065 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-08 00:23:26.107069 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-08 00:23:26.107073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-08 00:23:26.107076 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-08 00:23:26.107080 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-08 00:23:26.107084 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-08 00:23:26.107088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-08 00:23:26.107091 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-08 00:23:26.107095 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:23:26.107099 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-08 00:23:26.107112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-08 00:23:26.107120 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-08 00:23:26.107123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-08 00:23:26.107127 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-08 00:23:26.107142 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-08 00:23:26.107146 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:23:26.107150 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-08 00:23:26.107154 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-08 00:23:26.107158 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-08 00:23:26.107161 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-08 00:23:26.107165 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:23:26.107169 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-08 00:23:26.107173 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-08 00:23:26.107177 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:23:26.107180 | orchestrator |
2026-04-08 00:23:26.107184 |
orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-08 00:23:26.107188 | orchestrator |
2026-04-08 00:23:26.107192 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-08 00:23:26.107196 | orchestrator | Wednesday 08 April 2026 00:23:19 +0000 (0:00:00.443) 0:00:05.622 *******
2026-04-08 00:23:26.107200 | orchestrator | ok: [testbed-manager]
2026-04-08 00:23:26.107203 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:23:26.107207 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:23:26.107211 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:23:26.107215 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:23:26.107218 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:23:26.107222 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:23:26.107226 | orchestrator |
2026-04-08 00:23:26.107230 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-08 00:23:26.107234 | orchestrator | Wednesday 08 April 2026 00:23:21 +0000 (0:00:01.164) 0:00:06.787 *******
2026-04-08 00:23:26.107237 | orchestrator | ok: [testbed-manager]
2026-04-08 00:23:26.107241 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:23:26.107245 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:23:26.107248 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:23:26.107252 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:23:26.107257 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:23:26.107261 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:23:26.107266 | orchestrator |
2026-04-08 00:23:26.107270 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-08 00:23:26.107274 | orchestrator | Wednesday 08 April 2026 00:23:22 +0000 (0:00:01.227) 0:00:08.014 *******
2026-04-08 00:23:26.107279 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:23:26.107286 | orchestrator |
2026-04-08 00:23:26.107290 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-04-08 00:23:26.107295 | orchestrator | Wednesday 08 April 2026 00:23:22 +0000 (0:00:00.245) 0:00:08.260 *******
2026-04-08 00:23:26.107299 | orchestrator | changed: [testbed-manager]
2026-04-08 00:23:26.107304 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:23:26.107308 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:23:26.107312 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:23:26.107317 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:23:26.107321 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:23:26.107325 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:23:26.107329 | orchestrator |
2026-04-08 00:23:26.107334 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-04-08 00:23:26.107338 | orchestrator | Wednesday 08 April 2026 00:23:23 +0000 (0:00:01.350) 0:00:09.611 *******
2026-04-08 00:23:26.107346 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:23:26.107350 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:23:26.107355 | orchestrator |
2026-04-08 00:23:26.107359 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-04-08 00:23:26.107363 | orchestrator | Wednesday 08 April 2026 00:23:24 +0000 (0:00:00.247) 0:00:09.858 *******
2026-04-08 00:23:26.107367 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:23:26.107370 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:23:26.107374 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:23:26.107378 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:23:26.107382 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:23:26.107386 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:23:26.107389 | orchestrator |
2026-04-08 00:23:26.107393 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-04-08 00:23:26.107397 | orchestrator | Wednesday 08 April 2026 00:23:25 +0000 (0:00:00.965) 0:00:10.824 *******
2026-04-08 00:23:26.107401 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:23:26.107405 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:23:26.107408 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:23:26.107412 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:23:26.107416 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:23:26.107420 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:23:26.107423 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:23:26.107427 | orchestrator |
2026-04-08 00:23:26.107431 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-04-08 00:23:26.107435 | orchestrator | Wednesday 08 April 2026 00:23:25 +0000 (0:00:00.546) 0:00:11.370 *******
2026-04-08 00:23:26.107439 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:23:26.107442 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:23:26.107446 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:23:26.107450 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:23:26.107454 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:23:26.107458 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:23:26.107462 | orchestrator | ok: [testbed-manager]
2026-04-08 00:23:26.107465 | orchestrator |
2026-04-08 00:23:26.107469 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-08 00:23:26.107473 | orchestrator | Wednesday 08 April 2026 00:23:25 +0000 (0:00:00.200) 0:00:11.778 *******
2026-04-08 00:23:26.107477 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:23:26.107481 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:23:26.107487 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:23:38.271640 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:23:38.271747 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:23:38.271761 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:23:38.271773 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:23:38.271784 | orchestrator |
2026-04-08 00:23:38.271797 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-08 00:23:38.271810 | orchestrator | Wednesday 08 April 2026 00:23:26 +0000 (0:00:00.200) 0:00:11.979 *******
2026-04-08 00:23:38.271823 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:23:38.271852 | orchestrator |
2026-04-08 00:23:38.271864 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-08 00:23:38.271876 | orchestrator | Wednesday 08 April 2026 00:23:26 +0000 (0:00:00.268) 0:00:12.248 *******
2026-04-08 00:23:38.271946 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:23:38.271984 | orchestrator |
2026-04-08 00:23:38.271996 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-08 00:23:38.272007 | orchestrator | Wednesday 08 April 2026 00:23:26 +0000 (0:00:00.294) 0:00:12.542 *******
2026-04-08 00:23:38.272018 | orchestrator | ok: [testbed-manager]
2026-04-08 00:23:38.272030 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:23:38.272041 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:23:38.272052 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:23:38.272062 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:23:38.272073 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:23:38.272084 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:23:38.272095 | orchestrator |
2026-04-08 00:23:38.272106 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-08 00:23:38.272117 | orchestrator | Wednesday 08 April 2026 00:23:28 +0000 (0:00:01.309) 0:00:13.852 *******
2026-04-08 00:23:38.272128 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:23:38.272140 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:23:38.272150 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:23:38.272161 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:23:38.272172 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:23:38.272185 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:23:38.272197 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:23:38.272209 | orchestrator |
2026-04-08 00:23:38.272221 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-08 00:23:38.272234 | orchestrator | Wednesday 08 April 2026 00:23:28 +0000 (0:00:00.212) 0:00:14.064 *******
2026-04-08 00:23:38.272247 | orchestrator | ok: [testbed-manager]
2026-04-08 00:23:38.272259 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:23:38.272271 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:23:38.272283 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:23:38.272296 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:23:38.272308 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:23:38.272321 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:23:38.272333 | orchestrator |
2026-04-08 00:23:38.272345 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-08 00:23:38.272357 | orchestrator | Wednesday 08 April 2026 00:23:28 +0000 (0:00:00.528) 0:00:14.593 *******
2026-04-08 00:23:38.272371 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:23:38.272384 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:23:38.272395 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:23:38.272406 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:23:38.272417 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:23:38.272427 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:23:38.272438 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:23:38.272449 | orchestrator |
2026-04-08 00:23:38.272460 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-08 00:23:38.272471 | orchestrator | Wednesday 08 April 2026 00:23:29 +0000 (0:00:00.235) 0:00:14.829 *******
2026-04-08 00:23:38.272482 | orchestrator | ok: [testbed-manager]
2026-04-08 00:23:38.272493 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:23:38.272504 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:23:38.272515 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:23:38.272526 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:23:38.272537 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:23:38.272547 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:23:38.272558 | orchestrator |
2026-04-08 00:23:38.272581 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-08 00:23:38.272593 | orchestrator | Wednesday 08 April 2026 00:23:29 +0000 (0:00:00.518) 0:00:15.347 *******
2026-04-08 00:23:38.272603 | orchestrator | ok: [testbed-manager]
2026-04-08 00:23:38.272614 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:23:38.272625 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:23:38.272644 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:23:38.272655 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:23:38.272666 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:23:38.272677 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:23:38.272688 | orchestrator |
2026-04-08 00:23:38.272699 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-08 00:23:38.272710 | orchestrator | Wednesday 08 April 2026 00:23:30 +0000 (0:00:01.051) 0:00:16.399 *******
2026-04-08 00:23:38.272721 | orchestrator | ok: [testbed-manager]
2026-04-08 00:23:38.272732 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:23:38.272743 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:23:38.272754 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:23:38.272765 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:23:38.272780 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:23:38.272792 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:23:38.272803 | orchestrator |
2026-04-08 00:23:38.272814 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-08 00:23:38.272825 | orchestrator | Wednesday 08 April 2026 00:23:32 +0000 (0:00:01.944) 0:00:18.343 *******
2026-04-08 00:23:38.272856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:23:38.272868 | orchestrator |
2026-04-08 00:23:38.272880 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-08 00:23:38.272911 | orchestrator | Wednesday 08 April 2026
00:23:32 +0000 (0:00:00.310) 0:00:18.654 *******
2026-04-08 00:23:38.272922 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:23:38.272933 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:23:38.272944 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:23:38.272955 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:23:38.272966 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:23:38.272977 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:23:38.272988 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:23:38.272999 | orchestrator |
2026-04-08 00:23:38.273010 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-08 00:23:38.273021 | orchestrator | Wednesday 08 April 2026 00:23:34 +0000 (0:00:01.232) 0:00:19.887 *******
2026-04-08 00:23:38.273032 | orchestrator | ok: [testbed-manager]
2026-04-08 00:23:38.273043 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:23:38.273054 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:23:38.273065 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:23:38.273076 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:23:38.273087 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:23:38.273097 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:23:38.273108 | orchestrator |
2026-04-08 00:23:38.273119 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-08 00:23:38.273130 | orchestrator | Wednesday 08 April 2026 00:23:34 +0000 (0:00:00.216) 0:00:20.103 *******
2026-04-08 00:23:38.273141 | orchestrator | ok: [testbed-manager]
2026-04-08 00:23:38.273152 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:23:38.273163 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:23:38.273174 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:23:38.273184 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:23:38.273195 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:23:38.273206 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:23:38.273217 | orchestrator |
2026-04-08 00:23:38.273228 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-08 00:23:38.273239 | orchestrator | Wednesday 08 April 2026 00:23:34 +0000 (0:00:00.209) 0:00:20.313 *******
2026-04-08 00:23:38.273250 | orchestrator | ok: [testbed-manager]
2026-04-08 00:23:38.273261 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:23:38.273272 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:23:38.273282 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:23:38.273293 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:23:38.273313 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:23:38.273323 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:23:38.273334 | orchestrator |
2026-04-08 00:23:38.273345 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-08 00:23:38.273356 | orchestrator | Wednesday 08 April 2026 00:23:34 +0000 (0:00:00.223) 0:00:20.536 *******
2026-04-08 00:23:38.273368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:23:38.273381 | orchestrator |
2026-04-08 00:23:38.273393 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-08 00:23:38.273404 | orchestrator | Wednesday 08 April 2026 00:23:35 +0000 (0:00:00.257) 0:00:20.793 *******
2026-04-08 00:23:38.273415 | orchestrator | ok: [testbed-manager]
2026-04-08 00:23:38.273425 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:23:38.273436 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:23:38.273447 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:23:38.273458 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:23:38.273469 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:23:38.273480 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:23:38.273490 | orchestrator |
2026-04-08 00:23:38.273501 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-08 00:23:38.273513 | orchestrator | Wednesday 08 April 2026 00:23:35 +0000 (0:00:00.495) 0:00:21.289 *******
2026-04-08 00:23:38.273524 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:23:38.273535 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:23:38.273546 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:23:38.273557 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:23:38.273568 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:23:38.273578 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:23:38.273589 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:23:38.273600 | orchestrator |
2026-04-08 00:23:38.273611 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-08 00:23:38.273622 | orchestrator | Wednesday 08 April 2026 00:23:35 +0000 (0:00:00.252) 0:00:21.541 *******
2026-04-08 00:23:38.273633 | orchestrator | ok: [testbed-manager]
2026-04-08 00:23:38.273644 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:23:38.273655 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:23:38.273666 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:23:38.273677 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:23:38.273688 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:23:38.273699 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:23:38.273710 | orchestrator |
2026-04-08 00:23:38.273721 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-08 00:23:38.273732 | orchestrator | Wednesday 08 April 2026 00:23:36 +0000 (0:00:01.042) 0:00:22.584 *******
2026-04-08 00:23:38.273743 | orchestrator | ok: [testbed-manager]
2026-04-08 00:23:38.273753 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:23:38.273764 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:23:38.273775 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:23:38.273786 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:23:38.273797 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:23:38.273808 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:23:38.273819 | orchestrator |
2026-04-08 00:23:38.273835 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-08 00:23:38.273847 | orchestrator | Wednesday 08 April 2026 00:23:37 +0000 (0:00:00.533) 0:00:23.118 *******
2026-04-08 00:23:38.273858 | orchestrator | ok: [testbed-manager]
2026-04-08 00:23:38.273869 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:23:38.273880 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:23:38.273908 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:23:38.273927 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:24:15.257766 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:24:15.257962 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:24:15.258094 | orchestrator |
2026-04-08 00:24:15.258125 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-08 00:24:15.258148 | orchestrator | Wednesday 08 April 2026 00:23:38 +0000 (0:00:00.986) 0:00:24.104 *******
2026-04-08 00:24:15.258170 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:24:15.258189 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:24:15.258208 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:24:15.258227 | orchestrator | changed: [testbed-manager]
2026-04-08 00:24:15.258247 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:24:15.258269 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:24:15.258293 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:24:15.258317 | orchestrator |
2026-04-08 00:24:15.258343 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-04-08 00:24:15.258368 | orchestrator | Wednesday 08 April 2026 00:23:53 +0000 (0:00:14.686) 0:00:38.791 *******
2026-04-08 00:24:15.258392 | orchestrator | ok: [testbed-manager]
2026-04-08 00:24:15.258410 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:24:15.258429 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:24:15.258452 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:24:15.258477 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:24:15.258503 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:24:15.258523 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:24:15.258542 | orchestrator |
2026-04-08 00:24:15.258562 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-04-08 00:24:15.258581 | orchestrator | Wednesday 08 April 2026 00:23:53 +0000 (0:00:00.161) 0:00:38.952 *******
2026-04-08 00:24:15.258601 | orchestrator | ok: [testbed-manager]
2026-04-08 00:24:15.258620 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:24:15.258640 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:24:15.258658 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:24:15.258676 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:24:15.258695 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:24:15.258713 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:24:15.258731 | orchestrator |
2026-04-08 00:24:15.258750 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-04-08 00:24:15.258769 | orchestrator | Wednesday 08 April 2026 00:23:53 +0000 (0:00:00.180) 0:00:39.132 *******
2026-04-08 00:24:15.258788 | orchestrator | ok: [testbed-manager]
2026-04-08 00:24:15.258807 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:24:15.258828 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:24:15.258848 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:24:15.258894 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:24:15.258914 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:24:15.258934 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:24:15.258953 | orchestrator |
2026-04-08 00:24:15.258972 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-04-08 00:24:15.258992 | orchestrator | Wednesday 08 April 2026 00:23:53 +0000 (0:00:00.166) 0:00:39.299 *******
2026-04-08 00:24:15.259014 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:24:15.259036 | orchestrator |
2026-04-08 00:24:15.259055 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-04-08 00:24:15.259073 | orchestrator | Wednesday 08 April 2026 00:23:53 +0000 (0:00:00.229) 0:00:39.529 *******
2026-04-08 00:24:15.259093 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:24:15.259112 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:24:15.259133 | orchestrator | ok: [testbed-manager]
2026-04-08 00:24:15.259152 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:24:15.259171 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:24:15.259191 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:24:15.259211 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:24:15.259230 | orchestrator |
2026-04-08 00:24:15.259249 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-04-08 00:24:15.259286 | orchestrator | Wednesday 08 April 2026 00:23:55 +0000 (0:00:01.375) 0:00:40.904 *******
2026-04-08 00:24:15.259306 | orchestrator | changed: [testbed-manager]
2026-04-08 00:24:15.259326 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:24:15.259347 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:24:15.259366 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:24:15.259386 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:24:15.259407 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:24:15.259427 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:24:15.259445 | orchestrator |
2026-04-08 00:24:15.259464 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-04-08 00:24:15.259483 | orchestrator | Wednesday 08 April 2026 00:23:56 +0000 (0:00:01.018) 0:00:41.922 *******
2026-04-08 00:24:15.259503 | orchestrator | ok: [testbed-manager]
2026-04-08 00:24:15.259523 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:24:15.259543 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:24:15.259562 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:24:15.259580 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:24:15.259599 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:24:15.259618 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:24:15.259638 | orchestrator |
2026-04-08 00:24:15.259658 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-04-08 00:24:15.259676 | orchestrator | Wednesday 08 April 2026 00:23:56 +0000 (0:00:00.766) 0:00:42.689 *******
2026-04-08 00:24:15.259697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:24:15.259717 | orchestrator |
2026-04-08 00:24:15.259736 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-04-08 00:24:15.259756 | orchestrator | Wednesday 08 April 2026 00:23:57 +0000 (0:00:00.282) 0:00:42.972 *******
2026-04-08 00:24:15.259772 | orchestrator | changed: [testbed-manager]
2026-04-08 00:24:15.259790 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:24:15.259808 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:24:15.259827 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:24:15.259847 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:24:15.259896 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:24:15.259915 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:24:15.259935 | orchestrator |
2026-04-08 00:24:15.259982 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-04-08 00:24:15.260002 | orchestrator | Wednesday 08 April 2026 00:23:58 +0000 (0:00:00.990) 0:00:43.962 *******
2026-04-08 00:24:15.260021 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:24:15.260040 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:24:15.260059 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:24:15.260078 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:24:15.260098 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:24:15.260117 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:24:15.260136 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:24:15.260156 | orchestrator |
2026-04-08 00:24:15.260176 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-04-08 00:24:15.260195 | orchestrator | Wednesday 08 April 2026 00:23:58 +0000 (0:00:00.228) 0:00:44.191 *******
2026-04-08 00:24:15.260215 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:24:15.260236 | orchestrator |
2026-04-08 00:24:15.260257 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-04-08 00:24:15.260277 | orchestrator | Wednesday 08 April 2026 00:23:58 +0000 (0:00:00.275) 0:00:44.467 *******
2026-04-08 00:24:15.260297 | orchestrator | ok:
[testbed-manager] 2026-04-08 00:24:15.260317 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:24:15.260353 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:24:15.260373 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:24:15.260392 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:24:15.260411 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:24:15.260430 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:24:15.260450 | orchestrator | 2026-04-08 00:24:15.260469 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-04-08 00:24:15.260489 | orchestrator | Wednesday 08 April 2026 00:24:00 +0000 (0:00:01.651) 0:00:46.119 ******* 2026-04-08 00:24:15.260509 | orchestrator | changed: [testbed-manager] 2026-04-08 00:24:15.260529 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:24:15.260549 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:24:15.260569 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:24:15.260589 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:24:15.260607 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:24:15.260624 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:24:15.260642 | orchestrator | 2026-04-08 00:24:15.260659 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-04-08 00:24:15.260677 | orchestrator | Wednesday 08 April 2026 00:24:01 +0000 (0:00:01.060) 0:00:47.180 ******* 2026-04-08 00:24:15.260697 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:24:15.260716 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:24:15.260735 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:24:15.260755 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:24:15.260775 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:24:15.260795 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:24:15.260815 | orchestrator | changed: [testbed-manager] 2026-04-08 00:24:15.260834 | 
orchestrator | 2026-04-08 00:24:15.260851 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-04-08 00:24:15.260940 | orchestrator | Wednesday 08 April 2026 00:24:12 +0000 (0:00:11.352) 0:00:58.533 ******* 2026-04-08 00:24:15.260960 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:24:15.260979 | orchestrator | ok: [testbed-manager] 2026-04-08 00:24:15.260998 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:24:15.261018 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:24:15.261036 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:24:15.261053 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:24:15.261071 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:24:15.261091 | orchestrator | 2026-04-08 00:24:15.261111 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-04-08 00:24:15.261130 | orchestrator | Wednesday 08 April 2026 00:24:13 +0000 (0:00:00.906) 0:00:59.439 ******* 2026-04-08 00:24:15.261150 | orchestrator | ok: [testbed-manager] 2026-04-08 00:24:15.261171 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:24:15.261191 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:24:15.261210 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:24:15.261229 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:24:15.261248 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:24:15.261268 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:24:15.261288 | orchestrator | 2026-04-08 00:24:15.261308 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-04-08 00:24:15.261328 | orchestrator | Wednesday 08 April 2026 00:24:14 +0000 (0:00:00.898) 0:01:00.338 ******* 2026-04-08 00:24:15.261347 | orchestrator | ok: [testbed-manager] 2026-04-08 00:24:15.261366 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:24:15.261386 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:24:15.261405 | orchestrator | ok: 
[testbed-node-2] 2026-04-08 00:24:15.261425 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:24:15.261444 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:24:15.261463 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:24:15.261481 | orchestrator | 2026-04-08 00:24:15.261499 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-04-08 00:24:15.261517 | orchestrator | Wednesday 08 April 2026 00:24:14 +0000 (0:00:00.207) 0:01:00.546 ******* 2026-04-08 00:24:15.261536 | orchestrator | ok: [testbed-manager] 2026-04-08 00:24:15.261572 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:24:15.261593 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:24:15.261613 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:24:15.261633 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:24:15.261675 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:24:15.261696 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:24:15.261716 | orchestrator | 2026-04-08 00:24:15.261737 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-04-08 00:24:15.261764 | orchestrator | Wednesday 08 April 2026 00:24:14 +0000 (0:00:00.215) 0:01:00.762 ******* 2026-04-08 00:24:15.261785 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:24:15.261807 | orchestrator | 2026-04-08 00:24:15.261846 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-04-08 00:26:35.399854 | orchestrator | Wednesday 08 April 2026 00:24:15 +0000 (0:00:00.273) 0:01:01.036 ******* 2026-04-08 00:26:35.399956 | orchestrator | ok: [testbed-manager] 2026-04-08 00:26:35.399970 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:26:35.399981 | orchestrator | 
ok: [testbed-node-1] 2026-04-08 00:26:35.399990 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:26:35.399999 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:26:35.400008 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:26:35.400017 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:26:35.400026 | orchestrator | 2026-04-08 00:26:35.400037 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-04-08 00:26:35.400046 | orchestrator | Wednesday 08 April 2026 00:24:16 +0000 (0:00:01.646) 0:01:02.682 ******* 2026-04-08 00:26:35.400055 | orchestrator | changed: [testbed-manager] 2026-04-08 00:26:35.400065 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:26:35.400074 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:26:35.400083 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:26:35.400092 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:26:35.400101 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:26:35.400109 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:26:35.400118 | orchestrator | 2026-04-08 00:26:35.400127 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-04-08 00:26:35.400137 | orchestrator | Wednesday 08 April 2026 00:24:17 +0000 (0:00:00.633) 0:01:03.316 ******* 2026-04-08 00:26:35.400146 | orchestrator | ok: [testbed-manager] 2026-04-08 00:26:35.400155 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:26:35.400164 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:26:35.400174 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:26:35.400183 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:26:35.400192 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:26:35.400200 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:26:35.400209 | orchestrator | 2026-04-08 00:26:35.400218 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-04-08 
00:26:35.400227 | orchestrator | Wednesday 08 April 2026 00:24:17 +0000 (0:00:00.256) 0:01:03.573 ******* 2026-04-08 00:26:35.400236 | orchestrator | ok: [testbed-manager] 2026-04-08 00:26:35.400245 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:26:35.400254 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:26:35.400262 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:26:35.400271 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:26:35.400280 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:26:35.400289 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:26:35.400297 | orchestrator | 2026-04-08 00:26:35.400307 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-04-08 00:26:35.400316 | orchestrator | Wednesday 08 April 2026 00:24:18 +0000 (0:00:01.132) 0:01:04.705 ******* 2026-04-08 00:26:35.400325 | orchestrator | changed: [testbed-manager] 2026-04-08 00:26:35.400334 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:26:35.400346 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:26:35.400379 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:26:35.400390 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:26:35.400401 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:26:35.400411 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:26:35.400421 | orchestrator | 2026-04-08 00:26:35.400432 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-04-08 00:26:35.400442 | orchestrator | Wednesday 08 April 2026 00:24:20 +0000 (0:00:01.714) 0:01:06.420 ******* 2026-04-08 00:26:35.400452 | orchestrator | ok: [testbed-manager] 2026-04-08 00:26:35.400461 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:26:35.400470 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:26:35.400479 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:26:35.400488 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:26:35.400497 | orchestrator | ok: 
[testbed-node-3] 2026-04-08 00:26:35.400505 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:26:35.400514 | orchestrator | 2026-04-08 00:26:35.400523 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-04-08 00:26:35.400532 | orchestrator | Wednesday 08 April 2026 00:24:23 +0000 (0:00:02.368) 0:01:08.789 ******* 2026-04-08 00:26:35.400541 | orchestrator | ok: [testbed-manager] 2026-04-08 00:26:35.400550 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:26:35.400559 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:26:35.400568 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:26:35.400576 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:26:35.400585 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:26:35.400594 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:26:35.400603 | orchestrator | 2026-04-08 00:26:35.400612 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-04-08 00:26:35.400621 | orchestrator | Wednesday 08 April 2026 00:25:02 +0000 (0:00:39.460) 0:01:48.249 ******* 2026-04-08 00:26:35.400630 | orchestrator | changed: [testbed-manager] 2026-04-08 00:26:35.400639 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:26:35.400648 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:26:35.400657 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:26:35.400666 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:26:35.400675 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:26:35.400684 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:26:35.400693 | orchestrator | 2026-04-08 00:26:35.400706 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-04-08 00:26:35.400720 | orchestrator | Wednesday 08 April 2026 00:26:20 +0000 (0:01:17.639) 0:03:05.889 ******* 2026-04-08 00:26:35.400734 | orchestrator | ok: [testbed-manager] 2026-04-08 00:26:35.400748 | orchestrator | 
ok: [testbed-node-1] 2026-04-08 00:26:35.400781 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:26:35.400798 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:26:35.400811 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:26:35.400826 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:26:35.400841 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:26:35.400853 | orchestrator | 2026-04-08 00:26:35.400862 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-04-08 00:26:35.400871 | orchestrator | Wednesday 08 April 2026 00:26:21 +0000 (0:00:01.834) 0:03:07.723 ******* 2026-04-08 00:26:35.400879 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:26:35.400887 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:26:35.400909 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:26:35.400918 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:26:35.400926 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:26:35.400935 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:26:35.400944 | orchestrator | changed: [testbed-manager] 2026-04-08 00:26:35.400952 | orchestrator | 2026-04-08 00:26:35.400961 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-04-08 00:26:35.400970 | orchestrator | Wednesday 08 April 2026 00:26:34 +0000 (0:00:12.352) 0:03:20.076 ******* 2026-04-08 00:26:35.401001 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-04-08 00:26:35.401027 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-04-08 00:26:35.401039 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-04-08 00:26:35.401055 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-08 00:26:35.401064 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-08 00:26:35.401073 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-04-08 00:26:35.401082 | orchestrator | 2026-04-08 00:26:35.401091 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-04-08 00:26:35.401100 | orchestrator | Wednesday 08 April 2026 00:26:34 +0000 (0:00:00.392) 0:03:20.469 ******* 2026-04-08 00:26:35.401109 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-08 00:26:35.401118 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:26:35.401126 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-08 00:26:35.401135 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:26:35.401144 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-08 00:26:35.401152 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:26:35.401161 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-08 00:26:35.401170 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:26:35.401178 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-08 00:26:35.401187 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-08 00:26:35.401196 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-08 00:26:35.401204 | orchestrator | 2026-04-08 00:26:35.401213 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-04-08 00:26:35.401222 | orchestrator | Wednesday 08 April 2026 00:26:35 +0000 (0:00:00.635) 0:03:21.104 ******* 2026-04-08 00:26:35.401230 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-08 00:26:35.401247 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-08 00:26:35.401260 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-08 00:26:35.401269 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-08 00:26:35.401278 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-08 00:26:35.401293 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-08 00:26:41.911283 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-08 00:26:41.911388 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-08 00:26:41.911404 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-08 00:26:41.911417 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-08 00:26:41.911430 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:26:41.911443 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-08 00:26:41.911454 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-08 00:26:41.911465 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-08 00:26:41.911476 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-08 00:26:41.911487 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-08 00:26:41.911499 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-08 
00:26:41.911509 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-08 00:26:41.911522 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-08 00:26:41.911534 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-08 00:26:41.911545 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-08 00:26:41.911556 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-08 00:26:41.911567 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-08 00:26:41.911579 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-08 00:26:41.911590 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-08 00:26:41.911601 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-08 00:26:41.911612 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:26:41.911624 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-08 00:26:41.911636 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-08 00:26:41.911647 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-08 00:26:41.911658 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-08 00:26:41.911669 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-08 00:26:41.911681 | orchestrator | skipping: [testbed-node-4] 2026-04-08 
00:26:41.911692 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-08 00:26:41.911730 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-08 00:26:41.911742 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-08 00:26:41.911752 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-08 00:26:41.911786 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-08 00:26:41.911797 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-08 00:26:41.911808 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-08 00:26:41.911819 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-08 00:26:41.911830 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-08 00:26:41.911842 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-08 00:26:41.911853 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:26:41.911864 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-08 00:26:41.911875 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-08 00:26:41.911886 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-08 00:26:41.911898 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-08 00:26:41.911909 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-08 00:26:41.911941 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-08 00:26:41.911953 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-08 00:26:41.911964 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-08 00:26:41.911976 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-08 00:26:41.911988 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-08 00:26:41.911999 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-08 00:26:41.912010 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-08 00:26:41.912022 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-08 00:26:41.912033 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-08 00:26:41.912045 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-08 00:26:41.912057 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-08 00:26:41.912068 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-08 00:26:41.912076 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-08 00:26:41.912084 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-08 00:26:41.912092 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 
2026-04-08 00:26:41.912115 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-08 00:26:41.912123 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-08 00:26:41.912130 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-08 00:26:41.912149 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-08 00:26:41.912157 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-08 00:26:41.912165 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-08 00:26:41.912173 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-08 00:26:41.912181 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-08 00:26:41.912189 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-08 00:26:41.912196 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-08 00:26:41.912203 | orchestrator |
2026-04-08 00:26:41.912210 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-04-08 00:26:41.912217 | orchestrator | Wednesday 08 April 2026 00:26:40 +0000 (0:00:05.458) 0:03:26.562 *******
2026-04-08 00:26:41.912224 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-08 00:26:41.912231 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-08 00:26:41.912238 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-08 00:26:41.912244 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-08 00:26:41.912251 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-08 00:26:41.912258 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-08 00:26:41.912265 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-08 00:26:41.912271 | orchestrator |
2026-04-08 00:26:41.912278 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-04-08 00:26:41.912285 | orchestrator | Wednesday 08 April 2026 00:26:41 +0000 (0:00:00.624) 0:03:27.187 *******
2026-04-08 00:26:41.912291 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-08 00:26:41.912298 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-08 00:26:41.912305 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:26:41.912312 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-08 00:26:41.912318 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:26:41.912329 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-08 00:26:41.912335 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:26:41.912342 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:26:41.912349 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-08 00:26:41.912355 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-08 00:26:41.912366 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-08 00:26:55.271144 | orchestrator |
2026-04-08 00:26:55.271249 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-04-08 00:26:55.271262 | orchestrator | Wednesday 08 April 2026 00:26:41 +0000 (0:00:00.543) 0:03:27.731 *******
2026-04-08 00:26:55.271271 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-08 00:26:55.271280 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:26:55.271290 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-08 00:26:55.271322 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-08 00:26:55.271331 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:26:55.271339 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:26:55.271347 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-08 00:26:55.271355 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:26:55.271363 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-08 00:26:55.271371 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-08 00:26:55.271379 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-08 00:26:55.271387 | orchestrator |
2026-04-08 00:26:55.271395 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-04-08 00:26:55.271403 | orchestrator | Wednesday 08 April 2026 00:26:42 +0000 (0:00:00.510) 0:03:28.241 *******
2026-04-08 00:26:55.271411 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-08 00:26:55.271419 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:26:55.271426 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-08 00:26:55.271435 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-08 00:26:55.271443 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:26:55.271451 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-08 00:26:55.271459 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:26:55.271467 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:26:55.271475 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-08 00:26:55.271483 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-08 00:26:55.271491 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-08 00:26:55.271499 | orchestrator |
2026-04-08 00:26:55.271507 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-04-08 00:26:55.271515 | orchestrator | Wednesday 08 April 2026 00:26:44 +0000 (0:00:01.653) 0:03:29.895 *******
2026-04-08 00:26:55.271522 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:26:55.271530 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:26:55.271538 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:26:55.271545 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:26:55.271556 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:26:55.271569 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:26:55.271582 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:26:55.271594 | orchestrator |
2026-04-08 00:26:55.271607 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-04-08 00:26:55.271619 | orchestrator | Wednesday 08 April 2026 00:26:44 +0000 (0:00:00.287) 0:03:30.182 *******
2026-04-08 00:26:55.271632 | orchestrator | ok: [testbed-manager]
2026-04-08 00:26:55.271645 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:26:55.271657 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:26:55.271669 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:26:55.271682 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:26:55.271695 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:26:55.271709 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:26:55.271722 | orchestrator |
2026-04-08 00:26:55.271737 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-04-08 00:26:55.271786 | orchestrator | Wednesday 08 April 2026 00:26:50 +0000 (0:00:05.639) 0:03:35.821 *******
2026-04-08 00:26:55.271802 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-04-08 00:26:55.271828 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-04-08 00:26:55.271844 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:26:55.271858 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-04-08 00:26:55.271871 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:26:55.271886 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-04-08 00:26:55.271900 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:26:55.271914 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-04-08 00:26:55.271927 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:26:55.271941 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-04-08 00:26:55.271975 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:26:55.271991 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:26:55.272006 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-04-08 00:26:55.272021 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:26:55.272035 | orchestrator |
2026-04-08 00:26:55.272050 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-04-08 00:26:55.272065 | orchestrator | Wednesday 08 April 2026 00:26:50 +0000 (0:00:00.291) 0:03:36.113 *******
2026-04-08 00:26:55.272080 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-04-08 00:26:55.272095 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-04-08 00:26:55.272111 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-04-08 00:26:55.272146 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-04-08 00:26:55.272156 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-04-08 00:26:55.272165 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-04-08 00:26:55.272174 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-04-08 00:26:55.272182 | orchestrator |
2026-04-08 00:26:55.272191 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-04-08 00:26:55.272200 | orchestrator | Wednesday 08 April 2026 00:26:51 +0000 (0:00:01.022) 0:03:37.135 *******
2026-04-08 00:26:55.272210 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:26:55.272222 | orchestrator |
2026-04-08 00:26:55.272230 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-04-08 00:26:55.272239 | orchestrator | Wednesday 08 April 2026 00:26:51 +0000 (0:00:00.379) 0:03:37.515 *******
2026-04-08 00:26:55.272248 | orchestrator | ok: [testbed-manager]
2026-04-08 00:26:55.272257 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:26:55.272265 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:26:55.272274 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:26:55.272283 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:26:55.272292 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:26:55.272300 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:26:55.272309 | orchestrator |
2026-04-08 00:26:55.272318 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-04-08 00:26:55.272326 | orchestrator | Wednesday 08 April 2026 00:26:52 +0000 (0:00:01.223) 0:03:38.738 *******
2026-04-08 00:26:55.272335 | orchestrator | ok: [testbed-manager]
2026-04-08 00:26:55.272344 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:26:55.272352 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:26:55.272361 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:26:55.272369 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:26:55.272393 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:26:55.272402 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:26:55.272411 | orchestrator |
2026-04-08 00:26:55.272419 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-04-08 00:26:55.272438 | orchestrator | Wednesday 08 April 2026 00:26:53 +0000 (0:00:00.559) 0:03:39.298 *******
2026-04-08 00:26:55.272446 | orchestrator | changed: [testbed-manager]
2026-04-08 00:26:55.272455 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:26:55.272463 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:26:55.272481 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:26:55.272489 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:26:55.272498 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:26:55.272506 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:26:55.272515 | orchestrator |
2026-04-08 00:26:55.272524 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-04-08 00:26:55.272533 | orchestrator | Wednesday 08 April 2026 00:26:54 +0000 (0:00:00.559) 0:03:39.951 *******
2026-04-08 00:26:55.272541 | orchestrator | ok: [testbed-manager]
2026-04-08 00:26:55.272550 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:26:55.272558 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:26:55.272567 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:26:55.272576 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:26:55.272584 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:26:55.272593 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:26:55.272601 | orchestrator |
2026-04-08 00:26:55.272610 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-04-08 00:26:55.272619 | orchestrator | Wednesday 08 April 2026 00:26:54 +0000 (0:00:00.559) 0:03:40.511 *******
2026-04-08 00:26:55.272632 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775606612.851, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:26:55.272645 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775606695.2557483, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:26:55.272661 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775606694.1781852, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:26:55.272691 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775606705.30241, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:27:00.413319 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775606696.5117915, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:27:00.413446 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775606690.5884333, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:27:00.413460 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775606701.648052, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:27:00.413467 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:27:00.413472 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:27:00.413478 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:27:00.413484 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:27:00.413516 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:27:00.413543 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:27:00.413551 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-08 00:27:00.413558 | orchestrator |
2026-04-08 00:27:00.413565 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-04-08 00:27:00.413572 | orchestrator | Wednesday 08 April 2026 00:26:55 +0000 (0:00:00.955) 0:03:41.466 *******
2026-04-08 00:27:00.413578 | orchestrator | changed: [testbed-manager]
2026-04-08 00:27:00.413588 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:27:00.413597 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:27:00.413606 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:27:00.413614 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:27:00.413622 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:27:00.413631 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:27:00.413636 | orchestrator |
2026-04-08 00:27:00.413642 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-04-08 00:27:00.413647 | orchestrator | Wednesday 08 April 2026 00:26:56 +0000 (0:00:01.086) 0:03:42.552 *******
2026-04-08 00:27:00.413652 | orchestrator | changed: [testbed-manager]
2026-04-08 00:27:00.413657 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:27:00.413662 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:27:00.413668 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:27:00.413673 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:27:00.413678 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:27:00.413683 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:27:00.413689 | orchestrator |
2026-04-08 00:27:00.413694 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-04-08 00:27:00.413699 | orchestrator | Wednesday 08 April 2026 00:26:57 +0000 (0:00:01.144) 0:03:43.696 *******
2026-04-08 00:27:00.413704 | orchestrator | changed: [testbed-manager]
2026-04-08 00:27:00.413709 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:27:00.413715 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:27:00.413720 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:27:00.413725 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:27:00.413730 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:27:00.413735 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:27:00.413740 | orchestrator |
2026-04-08 00:27:00.413813 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-04-08 00:27:00.413823 | orchestrator | Wednesday 08 April 2026 00:26:59 +0000 (0:00:01.211) 0:03:44.908 *******
2026-04-08 00:27:00.413832 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:27:00.413840 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:27:00.413847 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:27:00.413853 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:27:00.413859 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:27:00.413868 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:27:00.413875 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:27:00.413881 | orchestrator |
2026-04-08 00:27:00.413894 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-04-08 00:27:00.413900 | orchestrator | Wednesday 08 April 2026 00:26:59 +0000 (0:00:00.236) 0:03:45.144 *******
2026-04-08 00:27:00.413906 | orchestrator | ok: [testbed-manager]
2026-04-08 00:27:00.413913 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:27:00.413919 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:27:00.413925 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:27:00.413931 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:27:00.413937 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:27:00.413943 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:27:00.413949 | orchestrator |
2026-04-08 00:27:00.413955 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-04-08 00:27:00.413963 | orchestrator | Wednesday 08 April 2026 00:27:00 +0000 (0:00:00.699) 0:03:45.844 *******
2026-04-08 00:27:00.413974 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:27:00.413985 | orchestrator |
2026-04-08 00:27:00.413993 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-04-08 00:27:00.414008 | orchestrator | Wednesday 08 April 2026 00:27:00 +0000 (0:00:00.349) 0:03:46.193 *******
2026-04-08 00:28:14.727755 | orchestrator | ok: [testbed-manager]
2026-04-08 00:28:14.727870 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:28:14.727886 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:28:14.727898 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:28:14.727909 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:28:14.727920 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:28:14.727932 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:28:14.727943 | orchestrator |
2026-04-08 00:28:14.727955 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-04-08 00:28:14.727968 | orchestrator | Wednesday 08 April 2026 00:27:08 +0000 (0:00:08.113) 0:03:54.307 *******
2026-04-08 00:28:14.727979 | orchestrator | ok: [testbed-manager]
2026-04-08 00:28:14.727990 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:28:14.728001 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:28:14.728012 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:28:14.728023 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:28:14.728034 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:28:14.728045 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:28:14.728056 | orchestrator |
2026-04-08 00:28:14.728067 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-04-08 00:28:14.728078 | orchestrator | Wednesday 08 April 2026 00:27:09 +0000 (0:00:01.240) 0:03:55.548 *******
2026-04-08 00:28:14.728090 | orchestrator | ok: [testbed-manager]
2026-04-08 00:28:14.728101 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:28:14.728112 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:28:14.728123 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:28:14.728133 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:28:14.728144 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:28:14.728155 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:28:14.728166 | orchestrator |
2026-04-08 00:28:14.728177 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-04-08 00:28:14.728189 | orchestrator | Wednesday 08 April 2026 00:27:10 +0000 (0:00:00.961) 0:03:56.509 *******
2026-04-08 00:28:14.728200 | orchestrator | ok: [testbed-manager]
2026-04-08 00:28:14.728211 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:28:14.728222 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:28:14.728235 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:28:14.728248 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:28:14.728260 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:28:14.728273 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:28:14.728285 | orchestrator |
2026-04-08 00:28:14.728298 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-04-08 00:28:14.728311 | orchestrator | Wednesday 08 April 2026 00:27:10 +0000 (0:00:00.258) 0:03:56.768 *******
2026-04-08 00:28:14.728350 | orchestrator | ok: [testbed-manager]
2026-04-08 00:28:14.728363 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:28:14.728375 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:28:14.728387 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:28:14.728399 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:28:14.728411 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:28:14.728422 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:28:14.728434 | orchestrator |
2026-04-08 00:28:14.728447 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-04-08 00:28:14.728460 | orchestrator | Wednesday 08 April 2026 00:27:11 +0000 (0:00:00.258) 0:03:57.026 *******
2026-04-08 00:28:14.728473 | orchestrator | ok: [testbed-manager]
2026-04-08 00:28:14.728485 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:28:14.728497 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:28:14.728510 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:28:14.728522 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:28:14.728540 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:28:14.728561 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:28:14.728580 | orchestrator |
2026-04-08 00:28:14.728600 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-04-08 00:28:14.728618 | orchestrator | Wednesday 08 April 2026 00:27:11 +0000 (0:00:00.271) 0:03:57.297 *******
2026-04-08 00:28:14.728636 | orchestrator | ok: [testbed-manager]
2026-04-08 00:28:14.728652 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:28:14.728669 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:28:14.728686 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:28:14.728738 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:28:14.728757 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:28:14.728774 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:28:14.728791 | orchestrator |
2026-04-08 00:28:14.728808 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-08 00:28:14.728825 | orchestrator | Wednesday 08 April 2026 00:27:17 +0000 (0:00:05.548) 0:04:02.846 *******
2026-04-08 00:28:14.728846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:28:14.728867 | orchestrator |
2026-04-08 00:28:14.728903 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-08 00:28:14.728922 | orchestrator | Wednesday 08 April 2026 00:27:17 +0000 (0:00:00.346) 0:04:03.192 *******
2026-04-08 00:28:14.728941 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-04-08 00:28:14.728960 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-04-08 00:28:14.728978 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-04-08 00:28:14.728996 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-04-08 00:28:14.729015 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:28:14.729034 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-04-08 00:28:14.729054 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-04-08 00:28:14.729071 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:28:14.729090 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-04-08 00:28:14.729109 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-04-08 00:28:14.729127 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:28:14.729144 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:28:14.729156 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-04-08 00:28:14.729166 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-04-08 00:28:14.729177 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-04-08 00:28:14.729188 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-04-08 00:28:14.729224 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:28:14.729256 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:28:14.729273 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-04-08 00:28:14.729288 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-04-08 00:28:14.729304 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:28:14.729322 | orchestrator |
2026-04-08 00:28:14.729341 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-08 00:28:14.729360 | orchestrator | Wednesday 08 April 2026 00:27:17 +0000 (0:00:00.324) 0:04:03.517 *******
2026-04-08 00:28:14.729379 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:28:14.729390 | orchestrator |
2026-04-08 00:28:14.729401 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-08 00:28:14.729412 | orchestrator | Wednesday 08 April 2026 00:27:18 +0000 (0:00:00.481) 0:04:03.999 *******
2026-04-08 00:28:14.729423 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-04-08 00:28:14.729434 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-04-08 00:28:14.729445 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:28:14.729456 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-04-08 00:28:14.729467 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:28:14.729478 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-04-08 00:28:14.729489 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:28:14.729500 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:28:14.729510 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-04-08 00:28:14.729521 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-04-08 00:28:14.729532 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:28:14.729544 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:28:14.729555 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-04-08 00:28:14.729565 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:28:14.729576 | orchestrator |
2026-04-08 00:28:14.729587 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-08 00:28:14.729598 | orchestrator | Wednesday 08 April 2026 00:27:18 +0000 (0:00:00.308) 0:04:04.307 *******
2026-04-08 00:28:14.729609 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:28:14.729620 | orchestrator |
2026-04-08 00:28:14.729631 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-08 00:28:14.729642 | orchestrator | Wednesday 08 April 2026 00:27:18 +0000 (0:00:00.404) 0:04:04.711 *******
2026-04-08 00:28:14.729653 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:28:14.729664 | orchestrator | changed: [testbed-manager]
2026-04-08 00:28:14.729674 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:28:14.729685 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:28:14.729742 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:28:14.729755 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:28:14.729766 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:28:14.729777 | orchestrator |
2026-04-08 00:28:14.729787 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-08 00:28:14.729798 | orchestrator | Wednesday 08 April 2026 00:27:52 +0000 (0:00:33.947) 0:04:38.658 *******
2026-04-08 00:28:14.729809 | orchestrator | changed: [testbed-manager]
2026-04-08 00:28:14.729820 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:28:14.729831 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:28:14.729842 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:28:14.729853 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:28:14.729872 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:28:14.729884 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:28:14.729894 | orchestrator |
2026-04-08 00:28:14.729905 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-08 00:28:14.729916 | orchestrator | Wednesday 08 April 2026 00:28:00 +0000 (0:00:08.124) 0:04:46.783 *******
2026-04-08 00:28:14.729927 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:28:14.729938 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:28:14.729968 | orchestrator | changed: [testbed-manager]
2026-04-08 00:28:14.729980 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:28:14.729991 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:28:14.730002 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:28:14.730013 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:28:14.730086 | orchestrator |
2026-04-08 00:28:14.730097 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-04-08 00:28:14.730108 | orchestrator | Wednesday 08 April 2026 00:28:08 +0000 (0:00:07.294) 0:04:54.077 *******
2026-04-08 00:28:14.730119 | orchestrator | ok: [testbed-manager]
2026-04-08 00:28:14.730130 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:28:14.730141 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:28:14.730152 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:28:14.730163 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:28:14.730174 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:28:14.730184 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:28:14.730195 | orchestrator |
2026-04-08 00:28:14.730207 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-04-08 00:28:14.730218 | orchestrator | Wednesday 08 April 2026 00:28:09 +0000 (0:00:01.350) 0:04:55.428 *******
2026-04-08 00:28:14.730229 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:28:14.730240 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:28:14.730251 | orchestrator | changed: [testbed-manager]
2026-04-08 00:28:14.730262 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:28:14.730273 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:28:14.730283 | orchestrator | changed:
[testbed-node-3] 2026-04-08 00:28:14.730295 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:28:14.730306 | orchestrator | 2026-04-08 00:28:14.730326 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-04-08 00:28:24.233602 | orchestrator | Wednesday 08 April 2026 00:28:14 +0000 (0:00:05.077) 0:05:00.505 ******* 2026-04-08 00:28:24.233677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:28:24.233773 | orchestrator | 2026-04-08 00:28:24.233781 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-04-08 00:28:24.233785 | orchestrator | Wednesday 08 April 2026 00:28:15 +0000 (0:00:00.351) 0:05:00.857 ******* 2026-04-08 00:28:24.233790 | orchestrator | changed: [testbed-manager] 2026-04-08 00:28:24.233795 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:28:24.233799 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:28:24.233803 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:28:24.233807 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:28:24.233811 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:28:24.233815 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:28:24.233819 | orchestrator | 2026-04-08 00:28:24.233823 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-04-08 00:28:24.233827 | orchestrator | Wednesday 08 April 2026 00:28:15 +0000 (0:00:00.620) 0:05:01.478 ******* 2026-04-08 00:28:24.233831 | orchestrator | ok: [testbed-manager] 2026-04-08 00:28:24.233836 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:28:24.233840 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:28:24.233844 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:28:24.233848 | 
orchestrator | ok: [testbed-node-3] 2026-04-08 00:28:24.233852 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:28:24.233873 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:28:24.233877 | orchestrator | 2026-04-08 00:28:24.233881 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-04-08 00:28:24.233886 | orchestrator | Wednesday 08 April 2026 00:28:17 +0000 (0:00:01.385) 0:05:02.863 ******* 2026-04-08 00:28:24.233889 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:28:24.233893 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:28:24.233897 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:28:24.233901 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:28:24.233904 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:28:24.233908 | orchestrator | changed: [testbed-manager] 2026-04-08 00:28:24.233912 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:28:24.233916 | orchestrator | 2026-04-08 00:28:24.233920 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-04-08 00:28:24.233923 | orchestrator | Wednesday 08 April 2026 00:28:17 +0000 (0:00:00.628) 0:05:03.492 ******* 2026-04-08 00:28:24.233927 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:28:24.233931 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:28:24.233935 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:28:24.233939 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:28:24.233943 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:28:24.233946 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:28:24.233950 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:28:24.233954 | orchestrator | 2026-04-08 00:28:24.233958 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-04-08 00:28:24.233962 | orchestrator | Wednesday 08 April 2026 00:28:17 +0000 (0:00:00.206) 
0:05:03.698 ******* 2026-04-08 00:28:24.233966 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:28:24.233969 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:28:24.233985 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:28:24.233989 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:28:24.233993 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:28:24.233997 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:28:24.234001 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:28:24.234064 | orchestrator | 2026-04-08 00:28:24.234071 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-04-08 00:28:24.234077 | orchestrator | Wednesday 08 April 2026 00:28:18 +0000 (0:00:00.345) 0:05:04.044 ******* 2026-04-08 00:28:24.234083 | orchestrator | ok: [testbed-manager] 2026-04-08 00:28:24.234089 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:28:24.234116 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:28:24.234123 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:28:24.234129 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:28:24.234135 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:28:24.234141 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:28:24.234147 | orchestrator | 2026-04-08 00:28:24.234154 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-04-08 00:28:24.234161 | orchestrator | Wednesday 08 April 2026 00:28:18 +0000 (0:00:00.301) 0:05:04.346 ******* 2026-04-08 00:28:24.234167 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:28:24.234174 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:28:24.234181 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:28:24.234188 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:28:24.234196 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:28:24.234203 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:28:24.234210 
| orchestrator | skipping: [testbed-node-5] 2026-04-08 00:28:24.234216 | orchestrator | 2026-04-08 00:28:24.234221 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-04-08 00:28:24.234227 | orchestrator | Wednesday 08 April 2026 00:28:18 +0000 (0:00:00.208) 0:05:04.555 ******* 2026-04-08 00:28:24.234231 | orchestrator | ok: [testbed-manager] 2026-04-08 00:28:24.234236 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:28:24.234240 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:28:24.234247 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:28:24.234261 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:28:24.234267 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:28:24.234273 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:28:24.234279 | orchestrator | 2026-04-08 00:28:24.234286 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-04-08 00:28:24.234292 | orchestrator | Wednesday 08 April 2026 00:28:19 +0000 (0:00:00.299) 0:05:04.855 ******* 2026-04-08 00:28:24.234298 | orchestrator | ok: [testbed-manager] =>  2026-04-08 00:28:24.234304 | orchestrator |  docker_version: 5:27.5.1 2026-04-08 00:28:24.234310 | orchestrator | ok: [testbed-node-0] =>  2026-04-08 00:28:24.234317 | orchestrator |  docker_version: 5:27.5.1 2026-04-08 00:28:24.234323 | orchestrator | ok: [testbed-node-1] =>  2026-04-08 00:28:24.234330 | orchestrator |  docker_version: 5:27.5.1 2026-04-08 00:28:24.234336 | orchestrator | ok: [testbed-node-2] =>  2026-04-08 00:28:24.234342 | orchestrator |  docker_version: 5:27.5.1 2026-04-08 00:28:24.234365 | orchestrator | ok: [testbed-node-3] =>  2026-04-08 00:28:24.234372 | orchestrator |  docker_version: 5:27.5.1 2026-04-08 00:28:24.234379 | orchestrator | ok: [testbed-node-4] =>  2026-04-08 00:28:24.234384 | orchestrator |  docker_version: 5:27.5.1 2026-04-08 00:28:24.234389 | orchestrator | ok: [testbed-node-5] =>  2026-04-08 
00:28:24.234393 | orchestrator |  docker_version: 5:27.5.1 2026-04-08 00:28:24.234398 | orchestrator | 2026-04-08 00:28:24.234403 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-04-08 00:28:24.234407 | orchestrator | Wednesday 08 April 2026 00:28:19 +0000 (0:00:00.210) 0:05:05.065 ******* 2026-04-08 00:28:24.234412 | orchestrator | ok: [testbed-manager] =>  2026-04-08 00:28:24.234416 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-08 00:28:24.234421 | orchestrator | ok: [testbed-node-0] =>  2026-04-08 00:28:24.234425 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-08 00:28:24.234430 | orchestrator | ok: [testbed-node-1] =>  2026-04-08 00:28:24.234436 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-08 00:28:24.234442 | orchestrator | ok: [testbed-node-2] =>  2026-04-08 00:28:24.234449 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-08 00:28:24.234455 | orchestrator | ok: [testbed-node-3] =>  2026-04-08 00:28:24.234461 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-08 00:28:24.234467 | orchestrator | ok: [testbed-node-4] =>  2026-04-08 00:28:24.234474 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-08 00:28:24.234480 | orchestrator | ok: [testbed-node-5] =>  2026-04-08 00:28:24.234486 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-08 00:28:24.234493 | orchestrator | 2026-04-08 00:28:24.234499 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-04-08 00:28:24.234505 | orchestrator | Wednesday 08 April 2026 00:28:19 +0000 (0:00:00.235) 0:05:05.300 ******* 2026-04-08 00:28:24.234512 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:28:24.234518 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:28:24.234525 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:28:24.234530 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:28:24.234533 | orchestrator | skipping: [testbed-node-3] 2026-04-08 
00:28:24.234537 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:28:24.234541 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:28:24.234545 | orchestrator | 2026-04-08 00:28:24.234548 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-04-08 00:28:24.234552 | orchestrator | Wednesday 08 April 2026 00:28:19 +0000 (0:00:00.197) 0:05:05.498 ******* 2026-04-08 00:28:24.234556 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:28:24.234560 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:28:24.234563 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:28:24.234567 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:28:24.234571 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:28:24.234575 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:28:24.234578 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:28:24.234582 | orchestrator | 2026-04-08 00:28:24.234586 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-04-08 00:28:24.234594 | orchestrator | Wednesday 08 April 2026 00:28:19 +0000 (0:00:00.195) 0:05:05.694 ******* 2026-04-08 00:28:24.234599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:28:24.234604 | orchestrator | 2026-04-08 00:28:24.234608 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-04-08 00:28:24.234612 | orchestrator | Wednesday 08 April 2026 00:28:20 +0000 (0:00:00.354) 0:05:06.048 ******* 2026-04-08 00:28:24.234616 | orchestrator | ok: [testbed-manager] 2026-04-08 00:28:24.234619 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:28:24.234623 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:28:24.234627 | 
orchestrator | ok: [testbed-node-2] 2026-04-08 00:28:24.234631 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:28:24.234635 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:28:24.234638 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:28:24.234642 | orchestrator | 2026-04-08 00:28:24.234679 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-04-08 00:28:24.234704 | orchestrator | Wednesday 08 April 2026 00:28:21 +0000 (0:00:00.742) 0:05:06.791 ******* 2026-04-08 00:28:24.234709 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:28:24.234712 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:28:24.234716 | orchestrator | ok: [testbed-manager] 2026-04-08 00:28:24.234720 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:28:24.234724 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:28:24.234727 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:28:24.234731 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:28:24.234735 | orchestrator | 2026-04-08 00:28:24.234742 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-04-08 00:28:24.234747 | orchestrator | Wednesday 08 April 2026 00:28:23 +0000 (0:00:02.848) 0:05:09.640 ******* 2026-04-08 00:28:24.234751 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-04-08 00:28:24.234755 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-04-08 00:28:24.234759 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-04-08 00:28:24.234763 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:28:24.234766 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-04-08 00:28:24.234770 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-04-08 00:28:24.234774 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-04-08 00:28:24.234778 | orchestrator | skipping: [testbed-node-1] => 
(item=containerd)  2026-04-08 00:28:24.234781 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-04-08 00:28:24.234785 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-04-08 00:28:24.234789 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:28:24.234793 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-04-08 00:28:24.234796 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-04-08 00:28:24.234800 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-04-08 00:28:24.234804 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:28:24.234808 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-04-08 00:28:24.234816 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-04-08 00:29:23.297103 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-04-08 00:29:23.297200 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:29:23.297211 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-04-08 00:29:23.297220 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:29:23.297228 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-04-08 00:29:23.297236 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-04-08 00:29:23.297244 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:29:23.297275 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-04-08 00:29:23.297293 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-04-08 00:29:23.297309 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-04-08 00:29:23.297320 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:29:23.297332 | orchestrator | 2026-04-08 00:29:23.297346 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-04-08 00:29:23.297360 | orchestrator | Wednesday 08 April 2026 
00:28:24 +0000 (0:00:00.590) 0:05:10.231 ******* 2026-04-08 00:29:23.297373 | orchestrator | ok: [testbed-manager] 2026-04-08 00:29:23.297385 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:29:23.297397 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:29:23.297409 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:29:23.297417 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:29:23.297424 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:29:23.297431 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:29:23.297439 | orchestrator | 2026-04-08 00:29:23.297446 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-04-08 00:29:23.297454 | orchestrator | Wednesday 08 April 2026 00:28:31 +0000 (0:00:06.983) 0:05:17.214 ******* 2026-04-08 00:29:23.297461 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:29:23.297469 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:29:23.297476 | orchestrator | ok: [testbed-manager] 2026-04-08 00:29:23.297483 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:29:23.297491 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:29:23.297498 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:29:23.297505 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:29:23.297513 | orchestrator | 2026-04-08 00:29:23.297520 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-04-08 00:29:23.297527 | orchestrator | Wednesday 08 April 2026 00:28:32 +0000 (0:00:01.088) 0:05:18.303 ******* 2026-04-08 00:29:23.297535 | orchestrator | ok: [testbed-manager] 2026-04-08 00:29:23.297542 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:29:23.297549 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:29:23.297556 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:29:23.297564 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:29:23.297571 | orchestrator | changed: 
[testbed-node-3] 2026-04-08 00:29:23.297578 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:29:23.297585 | orchestrator | 2026-04-08 00:29:23.297593 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-04-08 00:29:23.297600 | orchestrator | Wednesday 08 April 2026 00:28:40 +0000 (0:00:08.331) 0:05:26.634 ******* 2026-04-08 00:29:23.297608 | orchestrator | changed: [testbed-manager] 2026-04-08 00:29:23.297615 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:29:23.297622 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:29:23.297629 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:29:23.297637 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:29:23.297719 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:29:23.297729 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:29:23.297737 | orchestrator | 2026-04-08 00:29:23.297746 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-04-08 00:29:23.297755 | orchestrator | Wednesday 08 April 2026 00:28:44 +0000 (0:00:03.387) 0:05:30.022 ******* 2026-04-08 00:29:23.297763 | orchestrator | ok: [testbed-manager] 2026-04-08 00:29:23.297772 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:29:23.297780 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:29:23.297788 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:29:23.297797 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:29:23.297805 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:29:23.297813 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:29:23.297821 | orchestrator | 2026-04-08 00:29:23.297830 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-04-08 00:29:23.297838 | orchestrator | Wednesday 08 April 2026 00:28:45 +0000 (0:00:01.275) 0:05:31.297 ******* 2026-04-08 00:29:23.297857 | orchestrator | ok: [testbed-manager] 2026-04-08 
00:29:23.297866 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:29:23.297874 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:29:23.297882 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:29:23.297890 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:29:23.297898 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:29:23.297907 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:29:23.297915 | orchestrator | 2026-04-08 00:29:23.297923 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-04-08 00:29:23.297932 | orchestrator | Wednesday 08 April 2026 00:28:46 +0000 (0:00:01.258) 0:05:32.556 ******* 2026-04-08 00:29:23.297940 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:29:23.297948 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:29:23.297957 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:29:23.297965 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:29:23.297973 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:29:23.297982 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:29:23.297990 | orchestrator | changed: [testbed-manager] 2026-04-08 00:29:23.297998 | orchestrator | 2026-04-08 00:29:23.298006 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-04-08 00:29:23.298062 | orchestrator | Wednesday 08 April 2026 00:28:47 +0000 (0:00:00.563) 0:05:33.120 ******* 2026-04-08 00:29:23.298070 | orchestrator | ok: [testbed-manager] 2026-04-08 00:29:23.298078 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:29:23.298085 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:29:23.298092 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:29:23.298099 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:29:23.298107 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:29:23.298114 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:29:23.298122 | 
orchestrator | 2026-04-08 00:29:23.298129 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-04-08 00:29:23.298152 | orchestrator | Wednesday 08 April 2026 00:28:56 +0000 (0:00:09.567) 0:05:42.688 ******* 2026-04-08 00:29:23.298159 | orchestrator | changed: [testbed-manager] 2026-04-08 00:29:23.298167 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:29:23.298174 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:29:23.298181 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:29:23.298189 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:29:23.298196 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:29:23.298203 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:29:23.298210 | orchestrator | 2026-04-08 00:29:23.298218 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-04-08 00:29:23.298225 | orchestrator | Wednesday 08 April 2026 00:28:57 +0000 (0:00:01.079) 0:05:43.768 ******* 2026-04-08 00:29:23.298233 | orchestrator | ok: [testbed-manager] 2026-04-08 00:29:23.298240 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:29:23.298247 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:29:23.298255 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:29:23.298268 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:29:23.298281 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:29:23.298292 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:29:23.298303 | orchestrator | 2026-04-08 00:29:23.298315 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-04-08 00:29:23.298327 | orchestrator | Wednesday 08 April 2026 00:29:06 +0000 (0:00:08.775) 0:05:52.543 ******* 2026-04-08 00:29:23.298340 | orchestrator | ok: [testbed-manager] 2026-04-08 00:29:23.298352 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:29:23.298365 | orchestrator | changed: 
[testbed-node-2] 2026-04-08 00:29:23.298377 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:29:23.298402 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:29:23.298409 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:29:23.298417 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:29:23.298424 | orchestrator | 2026-04-08 00:29:23.298431 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-04-08 00:29:23.298446 | orchestrator | Wednesday 08 April 2026 00:29:17 +0000 (0:00:10.287) 0:06:02.830 ******* 2026-04-08 00:29:23.298454 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-04-08 00:29:23.298462 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-04-08 00:29:23.298469 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-04-08 00:29:23.298476 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-04-08 00:29:23.298484 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-04-08 00:29:23.298491 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-04-08 00:29:23.298498 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-04-08 00:29:23.298506 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-04-08 00:29:23.298513 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-04-08 00:29:23.298520 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-04-08 00:29:23.298528 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-04-08 00:29:23.298535 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-04-08 00:29:23.298543 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-04-08 00:29:23.298550 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-04-08 00:29:23.298557 | orchestrator | 2026-04-08 00:29:23.298565 | orchestrator | TASK [osism.services.docker : Install python3 docker package] 
******************
2026-04-08 00:29:23.298572 | orchestrator | Wednesday 08 April 2026 00:29:18 +0000 (0:00:01.150) 0:06:03.981 *******
2026-04-08 00:29:23.298579 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:29:23.298587 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:29:23.298594 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:29:23.298602 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:29:23.298609 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:29:23.298616 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:29:23.298624 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:29:23.298631 | orchestrator |
2026-04-08 00:29:23.298638 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-04-08 00:29:23.298664 | orchestrator | Wednesday 08 April 2026 00:29:18 +0000 (0:00:00.644) 0:06:04.625 *******
2026-04-08 00:29:23.298672 | orchestrator | ok: [testbed-manager]
2026-04-08 00:29:23.298679 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:29:23.298687 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:29:23.298694 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:29:23.298702 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:29:23.298709 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:29:23.298716 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:29:23.298723 | orchestrator |
2026-04-08 00:29:23.298731 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-04-08 00:29:23.298752 | orchestrator | Wednesday 08 April 2026 00:29:22 +0000 (0:00:03.732) 0:06:08.357 *******
2026-04-08 00:29:23.298759 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:29:23.298767 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:29:23.298774 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:29:23.298781 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:29:23.298788 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:29:23.298795 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:29:23.298803 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:29:23.298810 | orchestrator |
2026-04-08 00:29:23.298818 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-04-08 00:29:23.298826 | orchestrator | Wednesday 08 April 2026 00:29:23 +0000 (0:00:00.457) 0:06:08.815 *******
2026-04-08 00:29:23.298834 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-04-08 00:29:23.298841 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-04-08 00:29:23.298849 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:29:23.298861 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-04-08 00:29:23.298869 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-04-08 00:29:23.298876 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:29:23.298884 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-04-08 00:29:23.298891 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-04-08 00:29:23.298898 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:29:23.298912 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-04-08 00:29:40.361952 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-04-08 00:29:40.362088 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:29:40.362100 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-04-08 00:29:40.362108 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-04-08 00:29:40.362116 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:29:40.362124 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-04-08 00:29:40.362131 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-04-08 00:29:40.362138 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:29:40.362146 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-04-08 00:29:40.362154 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-04-08 00:29:40.362160 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:29:40.362168 | orchestrator |
2026-04-08 00:29:40.362177 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-04-08 00:29:40.362185 | orchestrator | Wednesday 08 April 2026 00:29:23 +0000 (0:00:00.528) 0:06:09.344 *******
2026-04-08 00:29:40.362192 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:29:40.362200 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:29:40.362206 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:29:40.362214 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:29:40.362221 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:29:40.362228 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:29:40.362235 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:29:40.362242 | orchestrator |
2026-04-08 00:29:40.362250 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-04-08 00:29:40.362258 | orchestrator | Wednesday 08 April 2026 00:29:24 +0000 (0:00:00.450) 0:06:09.794 *******
2026-04-08 00:29:40.362265 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:29:40.362272 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:29:40.362280 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:29:40.362287 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:29:40.362294 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:29:40.362302 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:29:40.362309 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:29:40.362316 | orchestrator |
2026-04-08 00:29:40.362323 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-08 00:29:40.362331 | orchestrator | Wednesday 08 April 2026 00:29:24 +0000 (0:00:00.608) 0:06:10.402 *******
2026-04-08 00:29:40.362337 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:29:40.362343 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:29:40.362350 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:29:40.362358 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:29:40.362365 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:29:40.362373 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:29:40.362380 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:29:40.362388 | orchestrator |
2026-04-08 00:29:40.362395 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-04-08 00:29:40.362404 | orchestrator | Wednesday 08 April 2026 00:29:25 +0000 (0:00:00.499) 0:06:10.901 *******
2026-04-08 00:29:40.362411 | orchestrator | ok: [testbed-manager]
2026-04-08 00:29:40.362419 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:29:40.362427 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:29:40.362458 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:29:40.362466 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:29:40.362473 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:29:40.362481 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:29:40.362489 | orchestrator |
2026-04-08 00:29:40.362497 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-04-08 00:29:40.362509 | orchestrator | Wednesday 08 April 2026 00:29:26 +0000 (0:00:01.669) 0:06:12.570 *******
2026-04-08 00:29:40.362519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:29:40.362532 | orchestrator |
2026-04-08 00:29:40.362542 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-04-08 00:29:40.362550 | orchestrator | Wednesday 08 April 2026 00:29:27 +0000 (0:00:00.773) 0:06:13.344 *******
2026-04-08 00:29:40.362559 | orchestrator | ok: [testbed-manager]
2026-04-08 00:29:40.362569 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:29:40.362582 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:29:40.362604 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:29:40.362612 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:29:40.362620 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:29:40.362648 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:29:40.362654 | orchestrator |
2026-04-08 00:29:40.362661 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-04-08 00:29:40.362668 | orchestrator | Wednesday 08 April 2026 00:29:28 +0000 (0:00:00.979) 0:06:14.324 *******
2026-04-08 00:29:40.362675 | orchestrator | ok: [testbed-manager]
2026-04-08 00:29:40.362683 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:29:40.362691 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:29:40.362702 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:29:40.362711 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:29:40.362720 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:29:40.362727 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:29:40.362735 | orchestrator |
2026-04-08 00:29:40.362746 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-04-08 00:29:40.362755 | orchestrator | Wednesday 08 April 2026 00:29:29 +0000 (0:00:01.154) 0:06:15.133 *******
2026-04-08 00:29:40.362765 | orchestrator | ok: [testbed-manager]
2026-04-08 00:29:40.362775 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:29:40.362783 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:29:40.362791 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:29:40.362800 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:29:40.362810 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:29:40.362820 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:29:40.362829 | orchestrator |
2026-04-08 00:29:40.362839 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-04-08 00:29:40.362861 | orchestrator | Wednesday 08 April 2026 00:29:30 +0000 (0:00:01.154) 0:06:16.287 *******
2026-04-08 00:29:40.362868 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:29:40.362876 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:29:40.362883 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:29:40.362890 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:29:40.362897 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:29:40.362903 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:29:40.362909 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:29:40.362915 | orchestrator |
2026-04-08 00:29:40.362920 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-04-08 00:29:40.362927 | orchestrator | Wednesday 08 April 2026 00:29:31 +0000 (0:00:01.212) 0:06:17.499 *******
2026-04-08 00:29:40.362934 | orchestrator | ok: [testbed-manager]
2026-04-08 00:29:40.362941 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:29:40.362948 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:29:40.362955 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:29:40.362969 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:29:40.362976 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:29:40.362983 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:29:40.362990 | orchestrator |
2026-04-08 00:29:40.362997 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-04-08 00:29:40.363004 | orchestrator | Wednesday 08 April 2026 00:29:32 +0000 (0:00:01.108) 0:06:18.608 *******
2026-04-08 00:29:40.363011 | orchestrator | changed: [testbed-manager]
2026-04-08 00:29:40.363018 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:29:40.363025 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:29:40.363032 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:29:40.363038 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:29:40.363045 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:29:40.363051 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:29:40.363058 | orchestrator |
2026-04-08 00:29:40.363065 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-04-08 00:29:40.363071 | orchestrator | Wednesday 08 April 2026 00:29:34 +0000 (0:00:01.327) 0:06:19.936 *******
2026-04-08 00:29:40.363078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:29:40.363085 | orchestrator |
2026-04-08 00:29:40.363092 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-04-08 00:29:40.363099 | orchestrator | Wednesday 08 April 2026 00:29:34 +0000 (0:00:00.731) 0:06:20.668 *******
2026-04-08 00:29:40.363105 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:29:40.363112 | orchestrator | ok: [testbed-manager]
2026-04-08 00:29:40.363119 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:29:40.363126 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:29:40.363133 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:29:40.363140 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:29:40.363146 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:29:40.363153 | orchestrator |
2026-04-08 00:29:40.363160 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-04-08 00:29:40.363167 | orchestrator | Wednesday 08 April 2026 00:29:36 +0000 (0:00:01.145) 0:06:21.813 *******
2026-04-08 00:29:40.363174 | orchestrator | ok: [testbed-manager]
2026-04-08 00:29:40.363181 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:29:40.363187 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:29:40.363194 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:29:40.363201 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:29:40.363206 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:29:40.363212 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:29:40.363217 | orchestrator |
2026-04-08 00:29:40.363223 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-04-08 00:29:40.363229 | orchestrator | Wednesday 08 April 2026 00:29:37 +0000 (0:00:01.051) 0:06:22.864 *******
2026-04-08 00:29:40.363235 | orchestrator | ok: [testbed-manager]
2026-04-08 00:29:40.363241 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:29:40.363247 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:29:40.363252 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:29:40.363258 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:29:40.363264 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:29:40.363271 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:29:40.363278 | orchestrator |
2026-04-08 00:29:40.363285 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-04-08 00:29:40.363292 | orchestrator | Wednesday 08 April 2026 00:29:38 +0000 (0:00:01.086) 0:06:23.951 *******
2026-04-08 00:29:40.363300 | orchestrator | ok: [testbed-manager]
2026-04-08 00:29:40.363307 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:29:40.363314 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:29:40.363320 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:29:40.363327 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:29:40.363335 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:29:40.363347 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:29:40.363354 | orchestrator |
2026-04-08 00:29:40.363361 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-04-08 00:29:40.363368 | orchestrator | Wednesday 08 April 2026 00:29:39 +0000 (0:00:01.110) 0:06:25.062 *******
2026-04-08 00:29:40.363376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:29:40.363383 | orchestrator |
2026-04-08 00:29:40.363390 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-08 00:29:40.363397 | orchestrator | Wednesday 08 April 2026 00:29:40 +0000 (0:00:00.813) 0:06:25.876 *******
2026-04-08 00:29:40.363404 | orchestrator |
2026-04-08 00:29:40.363411 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-08 00:29:40.363418 | orchestrator | Wednesday 08 April 2026 00:29:40 +0000 (0:00:00.039) 0:06:25.915 *******
2026-04-08 00:29:40.363425 | orchestrator |
2026-04-08 00:29:40.363432 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-08 00:29:40.363439 | orchestrator | Wednesday 08 April 2026 00:29:40 +0000 (0:00:00.182) 0:06:26.098 *******
2026-04-08 00:29:40.363446 | orchestrator |
2026-04-08 00:29:40.363453 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-08 00:29:40.363466 | orchestrator | Wednesday 08 April 2026 00:29:40 +0000 (0:00:00.039) 0:06:26.138 *******
2026-04-08 00:30:06.353589 | orchestrator |
2026-04-08 00:30:06.353798 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-08 00:30:06.353818 | orchestrator | Wednesday 08 April 2026 00:29:40 +0000 (0:00:00.041) 0:06:26.179 *******
2026-04-08 00:30:06.353830 | orchestrator |
2026-04-08 00:30:06.353842 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-08 00:30:06.353853 | orchestrator | Wednesday 08 April 2026 00:29:40 +0000 (0:00:00.057) 0:06:26.236 *******
2026-04-08 00:30:06.353864 | orchestrator |
2026-04-08 00:30:06.353876 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-08 00:30:06.353887 | orchestrator | Wednesday 08 April 2026 00:29:40 +0000 (0:00:00.039) 0:06:26.276 *******
2026-04-08 00:30:06.353898 | orchestrator |
2026-04-08 00:30:06.353910 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-08 00:30:06.353921 | orchestrator | Wednesday 08 April 2026 00:29:40 +0000 (0:00:00.040) 0:06:26.317 *******
2026-04-08 00:30:06.353932 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:30:06.353944 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:30:06.353955 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:30:06.353966 | orchestrator |
2026-04-08 00:30:06.353978 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-04-08 00:30:06.353989 | orchestrator | Wednesday 08 April 2026 00:29:41 +0000 (0:00:01.248) 0:06:27.565 *******
2026-04-08 00:30:06.354000 | orchestrator | changed: [testbed-manager]
2026-04-08 00:30:06.354015 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:30:06.354123 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:30:06.354143 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:30:06.354162 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:30:06.354180 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:30:06.354200 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:30:06.354219 | orchestrator |
2026-04-08 00:30:06.354238 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-04-08 00:30:06.354253 | orchestrator | Wednesday 08 April 2026 00:29:43 +0000 (0:00:01.396) 0:06:28.961 *******
2026-04-08 00:30:06.354265 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:30:06.354276 | orchestrator | changed: [testbed-manager]
2026-04-08 00:30:06.354287 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:30:06.354298 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:30:06.354308 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:30:06.354319 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:30:06.354359 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:30:06.354370 | orchestrator |
2026-04-08 00:30:06.354382 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-04-08 00:30:06.354393 | orchestrator | Wednesday 08 April 2026 00:29:44 +0000 (0:00:01.130) 0:06:30.092 *******
2026-04-08 00:30:06.354404 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:30:06.354414 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:30:06.354426 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:30:06.354436 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:30:06.354448 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:30:06.354458 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:30:06.354470 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:30:06.354481 | orchestrator |
2026-04-08 00:30:06.354492 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-04-08 00:30:06.354503 | orchestrator | Wednesday 08 April 2026 00:29:46 +0000 (0:00:02.254) 0:06:32.346 *******
2026-04-08 00:30:06.354514 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:30:06.354525 | orchestrator |
2026-04-08 00:30:06.354536 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-04-08 00:30:06.354547 | orchestrator | Wednesday 08 April 2026 00:29:46 +0000 (0:00:00.088) 0:06:32.435 *******
2026-04-08 00:30:06.354557 | orchestrator | ok: [testbed-manager]
2026-04-08 00:30:06.354568 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:30:06.354579 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:30:06.354590 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:30:06.354601 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:30:06.354652 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:30:06.354664 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:30:06.354675 | orchestrator |
2026-04-08 00:30:06.354686 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-04-08 00:30:06.354698 | orchestrator | Wednesday 08 April 2026 00:29:47 +0000 (0:00:01.158) 0:06:33.594 *******
2026-04-08 00:30:06.354709 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:30:06.354720 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:30:06.354731 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:30:06.354742 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:30:06.354758 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:30:06.354769 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:30:06.354780 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:30:06.354791 | orchestrator |
2026-04-08 00:30:06.354802 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-04-08 00:30:06.354813 | orchestrator | Wednesday 08 April 2026 00:29:48 +0000 (0:00:00.517) 0:06:34.111 *******
2026-04-08 00:30:06.354825 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:30:06.354840 | orchestrator |
2026-04-08 00:30:06.354851 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-04-08 00:30:06.354862 | orchestrator | Wednesday 08 April 2026 00:29:49 +0000 (0:00:00.861) 0:06:34.973 *******
2026-04-08 00:30:06.354873 | orchestrator | ok: [testbed-manager]
2026-04-08 00:30:06.354884 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:30:06.354895 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:30:06.354906 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:30:06.354917 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:30:06.354928 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:30:06.354939 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:30:06.354950 | orchestrator |
2026-04-08 00:30:06.354961 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-04-08 00:30:06.354973 | orchestrator | Wednesday 08 April 2026 00:29:50 +0000 (0:00:00.973) 0:06:35.946 *******
2026-04-08 00:30:06.354984 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-04-08 00:30:06.355025 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-04-08 00:30:06.355037 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-04-08 00:30:06.355048 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-04-08 00:30:06.355058 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-04-08 00:30:06.355069 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-04-08 00:30:06.355080 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-04-08 00:30:06.355091 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-04-08 00:30:06.355102 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-04-08 00:30:06.355113 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-04-08 00:30:06.355123 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-04-08 00:30:06.355134 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-04-08 00:30:06.355145 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-04-08 00:30:06.355156 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-04-08 00:30:06.355167 | orchestrator |
2026-04-08 00:30:06.355178 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-04-08 00:30:06.355189 | orchestrator | Wednesday 08 April 2026 00:29:52 +0000 (0:00:02.434) 0:06:38.381 *******
2026-04-08 00:30:06.355200 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:30:06.355210 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:30:06.355222 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:30:06.355232 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:30:06.355243 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:30:06.355254 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:30:06.355265 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:30:06.355275 | orchestrator |
2026-04-08 00:30:06.355287 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-04-08 00:30:06.355298 | orchestrator | Wednesday 08 April 2026 00:29:53 +0000 (0:00:00.469) 0:06:38.851 *******
2026-04-08 00:30:06.355311 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:30:06.355324 | orchestrator |
2026-04-08 00:30:06.355335 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-04-08 00:30:06.355346 | orchestrator | Wednesday 08 April 2026 00:29:53 +0000 (0:00:00.915) 0:06:39.767 *******
2026-04-08 00:30:06.355356 | orchestrator | ok: [testbed-manager]
2026-04-08 00:30:06.355367 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:30:06.355378 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:30:06.355389 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:30:06.355400 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:30:06.355411 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:30:06.355422 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:30:06.355433 | orchestrator |
2026-04-08 00:30:06.355444 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-04-08 00:30:06.355455 | orchestrator | Wednesday 08 April 2026 00:29:54 +0000 (0:00:00.823) 0:06:40.590 *******
2026-04-08 00:30:06.355466 | orchestrator | ok: [testbed-manager]
2026-04-08 00:30:06.355477 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:30:06.355491 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:30:06.355511 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:30:06.355528 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:30:06.355544 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:30:06.355559 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:30:06.355575 | orchestrator |
2026-04-08 00:30:06.355593 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-04-08 00:30:06.355637 | orchestrator | Wednesday 08 April 2026 00:29:55 +0000 (0:00:00.775) 0:06:41.366 *******
2026-04-08 00:30:06.355659 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:30:06.355691 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:30:06.355711 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:30:06.355730 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:30:06.355748 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:30:06.355760 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:30:06.355771 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:30:06.355781 | orchestrator |
2026-04-08 00:30:06.355792 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-04-08 00:30:06.355810 | orchestrator | Wednesday 08 April 2026 00:29:56 +0000 (0:00:00.457) 0:06:41.823 *******
2026-04-08 00:30:06.355821 | orchestrator | ok: [testbed-manager]
2026-04-08 00:30:06.355832 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:30:06.355843 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:30:06.355854 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:30:06.355865 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:30:06.355875 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:30:06.355886 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:30:06.355897 | orchestrator |
2026-04-08 00:30:06.355908 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-04-08 00:30:06.355918 | orchestrator | Wednesday 08 April 2026 00:29:57 +0000 (0:00:01.495) 0:06:43.319 *******
2026-04-08 00:30:06.355929 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:30:06.355940 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:30:06.355951 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:30:06.355962 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:30:06.355973 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:30:06.355983 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:30:06.355994 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:30:06.356005 | orchestrator |
2026-04-08 00:30:06.356016 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-04-08 00:30:06.356026 | orchestrator | Wednesday 08 April 2026 00:29:58 +0000 (0:00:00.629) 0:06:43.948 *******
2026-04-08 00:30:06.356037 | orchestrator | ok: [testbed-manager]
2026-04-08 00:30:06.356048 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:30:06.356059 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:30:06.356070 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:30:06.356080 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:30:06.356091 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:30:06.356111 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:30:38.113829 | orchestrator |
2026-04-08 00:30:38.113938 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-04-08 00:30:38.113956 | orchestrator | Wednesday 08 April 2026 00:30:06 +0000 (0:00:08.259) 0:06:52.208 *******
2026-04-08 00:30:38.113968 | orchestrator | ok: [testbed-manager]
2026-04-08 00:30:38.113980 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:30:38.113990 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:30:38.114000 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:30:38.114010 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:30:38.114080 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:30:38.114090 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:30:38.114100 | orchestrator |
2026-04-08 00:30:38.114127 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-04-08 00:30:38.114138 | orchestrator | Wednesday 08 April 2026 00:30:07 +0000 (0:00:01.301) 0:06:53.510 *******
2026-04-08 00:30:38.114148 | orchestrator | ok: [testbed-manager]
2026-04-08 00:30:38.114158 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:30:38.114168 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:30:38.114178 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:30:38.114188 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:30:38.114198 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:30:38.114208 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:30:38.114218 | orchestrator |
2026-04-08 00:30:38.114228 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-04-08 00:30:38.114275 | orchestrator | Wednesday 08 April 2026 00:30:09 +0000 (0:00:01.712) 0:06:55.222 *******
2026-04-08 00:30:38.114286 | orchestrator | ok: [testbed-manager]
2026-04-08 00:30:38.114297 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:30:38.114306 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:30:38.114316 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:30:38.114326 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:30:38.114336 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:30:38.114346 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:30:38.114356 | orchestrator |
2026-04-08 00:30:38.114365 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-08 00:30:38.114376 | orchestrator | Wednesday 08 April 2026 00:30:11 +0000 (0:00:01.772) 0:06:56.995 *******
2026-04-08 00:30:38.114385 | orchestrator | ok: [testbed-manager]
2026-04-08 00:30:38.114395 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:30:38.114405 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:30:38.114415 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:30:38.114425 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:30:38.114435 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:30:38.114445 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:30:38.114454 | orchestrator |
2026-04-08 00:30:38.114464 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-08 00:30:38.114474 | orchestrator | Wednesday 08 April 2026 00:30:12 +0000 (0:00:00.813) 0:06:57.808 *******
2026-04-08 00:30:38.114484 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:30:38.114494 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:30:38.114504 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:30:38.114514 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:30:38.114523 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:30:38.114533 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:30:38.114543 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:30:38.114553 | orchestrator |
2026-04-08 00:30:38.114563 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-04-08 00:30:38.114573 | orchestrator | Wednesday 08 April 2026 00:30:12 +0000 (0:00:00.771) 0:06:58.580 *******
2026-04-08 00:30:38.114641 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:30:38.114652 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:30:38.114662 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:30:38.114672 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:30:38.114681 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:30:38.114691 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:30:38.114701 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:30:38.114711 | orchestrator |
2026-04-08 00:30:38.114721 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-08 00:30:38.114731 | orchestrator | Wednesday 08 April 2026 00:30:13 +0000 (0:00:00.601) 0:06:59.181 *******
2026-04-08 00:30:38.114741 | orchestrator | ok: [testbed-manager]
2026-04-08 00:30:38.114751 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:30:38.114761 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:30:38.114771 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:30:38.114781 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:30:38.114790 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:30:38.114800 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:30:38.114810 | orchestrator |
2026-04-08 00:30:38.114820 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-04-08 00:30:38.114845 | orchestrator | Wednesday 08 April 2026 00:30:13 +0000 (0:00:00.469) 0:06:59.651 *******
2026-04-08 00:30:38.114855 | orchestrator | ok: [testbed-manager]
2026-04-08 00:30:38.114865 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:30:38.114874 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:30:38.114884 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:30:38.114894 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:30:38.114904 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:30:38.114913 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:30:38.114923 | orchestrator |
2026-04-08 00:30:38.114933 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-08 00:30:38.114953 | orchestrator | Wednesday 08 April 2026 00:30:14 +0000 (0:00:00.489) 0:07:00.140 *******
2026-04-08 00:30:38.114963 | orchestrator | ok: [testbed-manager]
2026-04-08 00:30:38.114973 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:30:38.114982 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:30:38.114992 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:30:38.115001 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:30:38.115011 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:30:38.115021 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:30:38.115031 | orchestrator |
2026-04-08 00:30:38.115041 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-08 00:30:38.115051 | orchestrator | Wednesday 08 April 2026 00:30:14 +0000 (0:00:00.460) 0:07:00.600 *******
2026-04-08 00:30:38.115061 | orchestrator | ok: [testbed-manager]
2026-04-08 00:30:38.115070 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:30:38.115080 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:30:38.115090 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:30:38.115100 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:30:38.115109 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:30:38.115119 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:30:38.115129 | orchestrator |
2026-04-08 00:30:38.115158 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-08 00:30:38.115168 | orchestrator | Wednesday 08 April 2026 00:30:20 +0000 (0:00:05.592) 0:07:06.192 *******
2026-04-08 00:30:38.115178 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:30:38.115188 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:30:38.115198 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:30:38.115208 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:30:38.115218 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:30:38.115227 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:30:38.115237 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:30:38.115247 | orchestrator |
2026-04-08 00:30:38.115257 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-04-08 00:30:38.115267 | orchestrator | Wednesday 08 April 2026 00:30:21 +0000 (0:00:00.689) 0:07:06.882 *******
2026-04-08 00:30:38.115279 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:30:38.115291 | orchestrator |
2026-04-08 00:30:38.115301 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-04-08 00:30:38.115312 | orchestrator | Wednesday 08 April 2026 00:30:21 +0000 (0:00:00.771) 0:07:07.653 *******
2026-04-08 00:30:38.115321 | orchestrator | ok: [testbed-manager]
2026-04-08 00:30:38.115331 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:30:38.115341 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:30:38.115365 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:30:38.115386 |
orchestrator | ok: [testbed-node-3] 2026-04-08 00:30:38.115396 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:30:38.115406 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:30:38.115415 | orchestrator | 2026-04-08 00:30:38.115426 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-04-08 00:30:38.115438 | orchestrator | Wednesday 08 April 2026 00:30:23 +0000 (0:00:01.884) 0:07:09.538 ******* 2026-04-08 00:30:38.115456 | orchestrator | ok: [testbed-manager] 2026-04-08 00:30:38.115469 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:30:38.115478 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:30:38.115488 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:30:38.115498 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:30:38.115508 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:30:38.115517 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:30:38.115527 | orchestrator | 2026-04-08 00:30:38.115537 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-04-08 00:30:38.115547 | orchestrator | Wednesday 08 April 2026 00:30:24 +0000 (0:00:01.183) 0:07:10.722 ******* 2026-04-08 00:30:38.115564 | orchestrator | ok: [testbed-manager] 2026-04-08 00:30:38.115574 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:30:38.115606 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:30:38.115616 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:30:38.115626 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:30:38.115636 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:30:38.115646 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:30:38.115656 | orchestrator | 2026-04-08 00:30:38.115666 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-04-08 00:30:38.115675 | orchestrator | Wednesday 08 April 2026 00:30:25 +0000 (0:00:00.829) 0:07:11.551 ******* 2026-04-08 00:30:38.115685 | orchestrator | changed: 
[testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-08 00:30:38.115697 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-08 00:30:38.115707 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-08 00:30:38.115717 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-08 00:30:38.115726 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-08 00:30:38.115741 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-08 00:30:38.115752 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-08 00:30:38.115762 | orchestrator | 2026-04-08 00:30:38.115771 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-04-08 00:30:38.115781 | orchestrator | Wednesday 08 April 2026 00:30:27 +0000 (0:00:01.650) 0:07:13.201 ******* 2026-04-08 00:30:38.115791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:30:38.115801 | orchestrator | 2026-04-08 00:30:38.115811 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-04-08 00:30:38.115821 | 
orchestrator | Wednesday 08 April 2026 00:30:28 +0000 (0:00:00.925) 0:07:14.127 ******* 2026-04-08 00:30:38.115831 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:30:38.115841 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:30:38.115851 | orchestrator | changed: [testbed-manager] 2026-04-08 00:30:38.115861 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:30:38.115871 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:30:38.115881 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:30:38.115891 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:30:38.115901 | orchestrator | 2026-04-08 00:30:38.115918 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-04-08 00:31:07.152122 | orchestrator | Wednesday 08 April 2026 00:30:38 +0000 (0:00:09.760) 0:07:23.887 ******* 2026-04-08 00:31:07.152243 | orchestrator | ok: [testbed-manager] 2026-04-08 00:31:07.152260 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:31:07.152271 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:31:07.152283 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:31:07.152294 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:31:07.152305 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:31:07.152316 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:31:07.152327 | orchestrator | 2026-04-08 00:31:07.152339 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-04-08 00:31:07.152351 | orchestrator | Wednesday 08 April 2026 00:30:39 +0000 (0:00:01.662) 0:07:25.549 ******* 2026-04-08 00:31:07.152389 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:31:07.152401 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:31:07.152412 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:31:07.152422 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:31:07.152433 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:31:07.152444 | orchestrator | ok: [testbed-node-5] 
2026-04-08 00:31:07.152455 | orchestrator | 2026-04-08 00:31:07.152467 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-04-08 00:31:07.152477 | orchestrator | Wednesday 08 April 2026 00:30:41 +0000 (0:00:01.428) 0:07:26.978 ******* 2026-04-08 00:31:07.152488 | orchestrator | changed: [testbed-manager] 2026-04-08 00:31:07.152500 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:07.152510 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:07.152521 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:07.152532 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:07.152543 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:07.152577 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:07.152588 | orchestrator | 2026-04-08 00:31:07.152599 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-04-08 00:31:07.152610 | orchestrator | 2026-04-08 00:31:07.152621 | orchestrator | TASK [Include hardening role] ************************************************** 2026-04-08 00:31:07.152633 | orchestrator | Wednesday 08 April 2026 00:30:42 +0000 (0:00:01.171) 0:07:28.149 ******* 2026-04-08 00:31:07.152646 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:31:07.152659 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:31:07.152672 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:31:07.152685 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:31:07.152698 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:31:07.152711 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:31:07.152723 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:31:07.152736 | orchestrator | 2026-04-08 00:31:07.152748 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-04-08 00:31:07.152761 | orchestrator | 2026-04-08 00:31:07.152774 | orchestrator | TASK 
[osism.services.journald : Copy configuration file] *********************** 2026-04-08 00:31:07.152786 | orchestrator | Wednesday 08 April 2026 00:30:42 +0000 (0:00:00.483) 0:07:28.633 ******* 2026-04-08 00:31:07.152800 | orchestrator | changed: [testbed-manager] 2026-04-08 00:31:07.152812 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:07.152824 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:07.152838 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:07.152850 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:07.152862 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:07.152875 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:07.152888 | orchestrator | 2026-04-08 00:31:07.152900 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-04-08 00:31:07.152913 | orchestrator | Wednesday 08 April 2026 00:30:44 +0000 (0:00:01.285) 0:07:29.918 ******* 2026-04-08 00:31:07.152925 | orchestrator | ok: [testbed-manager] 2026-04-08 00:31:07.152937 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:31:07.152950 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:31:07.152962 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:31:07.152975 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:31:07.152987 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:31:07.152999 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:31:07.153010 | orchestrator | 2026-04-08 00:31:07.153021 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-04-08 00:31:07.153032 | orchestrator | Wednesday 08 April 2026 00:30:45 +0000 (0:00:01.568) 0:07:31.486 ******* 2026-04-08 00:31:07.153043 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:31:07.153054 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:31:07.153065 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:31:07.153075 | orchestrator | skipping: [testbed-node-2] 
2026-04-08 00:31:07.153087 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:31:07.153106 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:31:07.153117 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:31:07.153128 | orchestrator | 2026-04-08 00:31:07.153154 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-04-08 00:31:07.153166 | orchestrator | Wednesday 08 April 2026 00:30:46 +0000 (0:00:00.461) 0:07:31.947 ******* 2026-04-08 00:31:07.153177 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:31:07.153190 | orchestrator | 2026-04-08 00:31:07.153201 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-04-08 00:31:07.153212 | orchestrator | Wednesday 08 April 2026 00:30:46 +0000 (0:00:00.778) 0:07:32.726 ******* 2026-04-08 00:31:07.153224 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:31:07.153238 | orchestrator | 2026-04-08 00:31:07.153248 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-04-08 00:31:07.153259 | orchestrator | Wednesday 08 April 2026 00:30:47 +0000 (0:00:00.929) 0:07:33.656 ******* 2026-04-08 00:31:07.153270 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:07.153281 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:07.153291 | orchestrator | changed: [testbed-manager] 2026-04-08 00:31:07.153302 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:07.153313 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:07.153324 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:07.153334 | 
orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:07.153345 | orchestrator | 2026-04-08 00:31:07.153374 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-04-08 00:31:07.153385 | orchestrator | Wednesday 08 April 2026 00:30:56 +0000 (0:00:08.450) 0:07:42.107 ******* 2026-04-08 00:31:07.153396 | orchestrator | changed: [testbed-manager] 2026-04-08 00:31:07.153407 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:07.153418 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:07.153429 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:07.153439 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:07.153450 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:07.153461 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:07.153471 | orchestrator | 2026-04-08 00:31:07.153482 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-04-08 00:31:07.153493 | orchestrator | Wednesday 08 April 2026 00:30:57 +0000 (0:00:00.807) 0:07:42.914 ******* 2026-04-08 00:31:07.153504 | orchestrator | changed: [testbed-manager] 2026-04-08 00:31:07.153515 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:07.153525 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:07.153536 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:07.153546 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:07.153577 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:07.153588 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:07.153599 | orchestrator | 2026-04-08 00:31:07.153610 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-04-08 00:31:07.153621 | orchestrator | Wednesday 08 April 2026 00:30:58 +0000 (0:00:01.260) 0:07:44.175 ******* 2026-04-08 00:31:07.153632 | orchestrator | changed: [testbed-manager] 2026-04-08 00:31:07.153642 | orchestrator | 
changed: [testbed-node-0] 2026-04-08 00:31:07.153653 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:07.153664 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:07.153675 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:07.153685 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:07.153696 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:07.153706 | orchestrator | 2026-04-08 00:31:07.153717 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-04-08 00:31:07.153736 | orchestrator | Wednesday 08 April 2026 00:31:00 +0000 (0:00:01.859) 0:07:46.034 ******* 2026-04-08 00:31:07.153747 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:07.153758 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:07.153769 | orchestrator | changed: [testbed-manager] 2026-04-08 00:31:07.153779 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:07.153790 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:07.153801 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:07.153812 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:07.153822 | orchestrator | 2026-04-08 00:31:07.153833 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-04-08 00:31:07.153844 | orchestrator | Wednesday 08 April 2026 00:31:01 +0000 (0:00:01.251) 0:07:47.285 ******* 2026-04-08 00:31:07.153855 | orchestrator | changed: [testbed-manager] 2026-04-08 00:31:07.153865 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:07.153876 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:07.153887 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:07.153898 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:07.153909 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:07.153919 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:07.153930 | orchestrator | 2026-04-08 
00:31:07.153941 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-04-08 00:31:07.153951 | orchestrator | 2026-04-08 00:31:07.153962 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-04-08 00:31:07.153973 | orchestrator | Wednesday 08 April 2026 00:31:02 +0000 (0:00:01.079) 0:07:48.365 ******* 2026-04-08 00:31:07.153984 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:31:07.153995 | orchestrator | 2026-04-08 00:31:07.154006 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-08 00:31:07.154089 | orchestrator | Wednesday 08 April 2026 00:31:03 +0000 (0:00:00.928) 0:07:49.294 ******* 2026-04-08 00:31:07.154112 | orchestrator | ok: [testbed-manager] 2026-04-08 00:31:07.154131 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:31:07.154152 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:31:07.154171 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:31:07.154191 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:31:07.154210 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:31:07.154221 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:31:07.154232 | orchestrator | 2026-04-08 00:31:07.154243 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-08 00:31:07.154254 | orchestrator | Wednesday 08 April 2026 00:31:04 +0000 (0:00:00.799) 0:07:50.094 ******* 2026-04-08 00:31:07.154265 | orchestrator | changed: [testbed-manager] 2026-04-08 00:31:07.154282 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:07.154301 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:07.154316 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:07.154327 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:07.154338 | 
orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:07.154348 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:07.154359 | orchestrator | 2026-04-08 00:31:07.154370 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-04-08 00:31:07.154381 | orchestrator | Wednesday 08 April 2026 00:31:05 +0000 (0:00:01.219) 0:07:51.313 ******* 2026-04-08 00:31:07.154392 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:31:07.154403 | orchestrator | 2026-04-08 00:31:07.154414 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-08 00:31:07.154425 | orchestrator | Wednesday 08 April 2026 00:31:06 +0000 (0:00:00.787) 0:07:52.101 ******* 2026-04-08 00:31:07.154435 | orchestrator | ok: [testbed-manager] 2026-04-08 00:31:07.154446 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:31:07.154467 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:31:07.154478 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:31:07.154488 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:31:07.154499 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:31:07.154510 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:31:07.154521 | orchestrator | 2026-04-08 00:31:07.154542 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-08 00:31:08.610953 | orchestrator | Wednesday 08 April 2026 00:31:07 +0000 (0:00:00.826) 0:07:52.927 ******* 2026-04-08 00:31:08.611055 | orchestrator | changed: [testbed-manager] 2026-04-08 00:31:08.611071 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:08.611084 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:08.611096 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:08.611107 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:31:08.611119 | 
orchestrator | changed: [testbed-node-4] 2026-04-08 00:31:08.611130 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:31:08.611141 | orchestrator | 2026-04-08 00:31:08.611154 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:31:08.611167 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-08 00:31:08.611180 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-08 00:31:08.611191 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-08 00:31:08.611225 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-08 00:31:08.611238 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-08 00:31:08.611249 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-08 00:31:08.611260 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-08 00:31:08.611272 | orchestrator | 2026-04-08 00:31:08.611283 | orchestrator | 2026-04-08 00:31:08.611294 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:31:08.611305 | orchestrator | Wednesday 08 April 2026 00:31:08 +0000 (0:00:01.172) 0:07:54.100 ******* 2026-04-08 00:31:08.611317 | orchestrator | =============================================================================== 2026-04-08 00:31:08.611328 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.64s 2026-04-08 00:31:08.611340 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.46s 2026-04-08 00:31:08.611351 | orchestrator | 
osism.commons.cleanup : Cleanup installed packages --------------------- 33.95s 2026-04-08 00:31:08.611363 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.69s 2026-04-08 00:31:08.611374 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.35s 2026-04-08 00:31:08.611386 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.35s 2026-04-08 00:31:08.611397 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.29s 2026-04-08 00:31:08.611408 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.76s 2026-04-08 00:31:08.611419 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.57s 2026-04-08 00:31:08.611430 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.78s 2026-04-08 00:31:08.611441 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.45s 2026-04-08 00:31:08.611453 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.33s 2026-04-08 00:31:08.611486 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.26s 2026-04-08 00:31:08.611500 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.12s 2026-04-08 00:31:08.611518 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.11s 2026-04-08 00:31:08.611531 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.29s 2026-04-08 00:31:08.611544 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.98s 2026-04-08 00:31:08.611587 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.64s 2026-04-08 00:31:08.611601 | orchestrator | 
osism.services.chrony : Populate service facts -------------------------- 5.59s 2026-04-08 00:31:08.611613 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.55s 2026-04-08 00:31:08.783497 | orchestrator | + osism apply fail2ban 2026-04-08 00:31:20.263905 | orchestrator | 2026-04-08 00:31:20 | INFO  | Prepare task for execution of fail2ban. 2026-04-08 00:31:20.353137 | orchestrator | 2026-04-08 00:31:20 | INFO  | Task cc98776f-252c-4f3c-bc4c-2ff449537c56 (fail2ban) was prepared for execution. 2026-04-08 00:31:20.353226 | orchestrator | 2026-04-08 00:31:20 | INFO  | It takes a moment until task cc98776f-252c-4f3c-bc4c-2ff449537c56 (fail2ban) has been started and output is visible here. 2026-04-08 00:31:40.987085 | orchestrator | 2026-04-08 00:31:40.987210 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-04-08 00:31:40.987232 | orchestrator | 2026-04-08 00:31:40.987245 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-04-08 00:31:40.987260 | orchestrator | Wednesday 08 April 2026 00:31:23 +0000 (0:00:00.318) 0:00:00.318 ******* 2026-04-08 00:31:40.987278 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:31:40.987294 | orchestrator | 2026-04-08 00:31:40.987307 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-04-08 00:31:40.987322 | orchestrator | Wednesday 08 April 2026 00:31:24 +0000 (0:00:01.149) 0:00:01.467 ******* 2026-04-08 00:31:40.987337 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:31:40.987353 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:31:40.987368 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:31:40.987382 | 
orchestrator | changed: [testbed-node-5]
2026-04-08 00:31:40.987397 | orchestrator | changed: [testbed-manager]
2026-04-08 00:31:40.987412 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:31:40.987421 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:31:40.987430 | orchestrator |
2026-04-08 00:31:40.987440 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-04-08 00:31:40.987455 | orchestrator | Wednesday 08 April 2026 00:31:36 +0000 (0:00:11.297) 0:00:12.765 *******
2026-04-08 00:31:40.987470 | orchestrator | changed: [testbed-manager]
2026-04-08 00:31:40.987484 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:31:40.987499 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:31:40.987512 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:31:40.987559 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:31:40.987573 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:31:40.987587 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:31:40.987600 | orchestrator |
2026-04-08 00:31:40.987617 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-08 00:31:40.987632 | orchestrator | Wednesday 08 April 2026 00:31:37 +0000 (0:00:01.566) 0:00:14.332 *******
2026-04-08 00:31:40.987647 | orchestrator | ok: [testbed-manager]
2026-04-08 00:31:40.987664 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:31:40.987680 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:31:40.987696 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:31:40.987746 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:31:40.987759 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:31:40.987768 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:31:40.987779 | orchestrator |
2026-04-08 00:31:40.987790 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-08 00:31:40.987800 | orchestrator | Wednesday 08 April 2026 00:31:39 +0000 (0:00:01.234) 0:00:15.567 *******
2026-04-08 00:31:40.987810 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:31:40.987820 | orchestrator | changed: [testbed-manager]
2026-04-08 00:31:40.987830 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:31:40.987840 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:31:40.987850 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:31:40.987860 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:31:40.987869 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:31:40.987879 | orchestrator |
2026-04-08 00:31:40.987889 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:31:40.987900 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:31:40.987911 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:31:40.987922 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:31:40.987933 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:31:40.987942 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:31:40.987953 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:31:40.987978 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:31:40.987987 | orchestrator |
2026-04-08 00:31:40.987996 | orchestrator |
2026-04-08 00:31:40.988005 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:31:40.988014 | orchestrator | Wednesday 08 April 2026 00:31:40 +0000 (0:00:01.664) 0:00:17.231 *******
2026-04-08 00:31:40.988022 | orchestrator | ===============================================================================
2026-04-08 00:31:40.988031 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.30s
2026-04-08 00:31:40.988040 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.66s
2026-04-08 00:31:40.988049 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.57s
2026-04-08 00:31:40.988058 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.23s
2026-04-08 00:31:40.988067 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.15s
2026-04-08 00:31:41.154987 | orchestrator | + osism apply network
2026-04-08 00:31:52.425651 | orchestrator | 2026-04-08 00:31:52 | INFO  | Prepare task for execution of network.
2026-04-08 00:31:52.499262 | orchestrator | 2026-04-08 00:31:52 | INFO  | Task 4fd11440-6e18-4343-a5c3-761e23581515 (network) was prepared for execution.
2026-04-08 00:31:52.499387 | orchestrator | 2026-04-08 00:31:52 | INFO  | It takes a moment until task 4fd11440-6e18-4343-a5c3-761e23581515 (network) has been started and output is visible here.
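[Editor's note: the fail2ban play above installs the package, copies configuration files, and reloads the service on the manager and all six nodes. The actual jail settings come from the osism.services.fail2ban role's templates, which are not shown in this log; purely as an illustration, a minimal jail override of the kind such a role typically deploys could look like the following — the file path and all values here are hypothetical, not taken from this job:]

```
# /etc/fail2ban/jail.d/sshd.local -- hypothetical sketch, NOT the role's
# actual template; shown only to illustrate what "Copy configuration
# files" distributes to each host.
[sshd]
enabled  = true
port     = ssh
maxretry = 5
bantime  = 1h
```

[A change like this takes effect without a service restart via a configuration reload, e.g. `fail2ban-client reload`, which corresponds to the "Reload fail2ban configuration" task above.]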
2026-04-08 00:32:18.825362 | orchestrator | 2026-04-08 00:32:18.825564 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-04-08 00:32:18.825582 | orchestrator | 2026-04-08 00:32:18.825590 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-04-08 00:32:18.825597 | orchestrator | Wednesday 08 April 2026 00:31:55 +0000 (0:00:00.277) 0:00:00.277 ******* 2026-04-08 00:32:18.825635 | orchestrator | ok: [testbed-manager] 2026-04-08 00:32:18.825645 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:32:18.825652 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:32:18.825659 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:32:18.825666 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:32:18.825673 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:32:18.825680 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:32:18.825686 | orchestrator | 2026-04-08 00:32:18.825693 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-04-08 00:32:18.825700 | orchestrator | Wednesday 08 April 2026 00:31:56 +0000 (0:00:00.557) 0:00:00.834 ******* 2026-04-08 00:32:18.825711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:32:18.825721 | orchestrator | 2026-04-08 00:32:18.825728 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-04-08 00:32:18.825735 | orchestrator | Wednesday 08 April 2026 00:31:57 +0000 (0:00:01.007) 0:00:01.842 ******* 2026-04-08 00:32:18.825742 | orchestrator | ok: [testbed-manager] 2026-04-08 00:32:18.825749 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:32:18.825756 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:32:18.825763 | 
orchestrator | ok: [testbed-node-2] 2026-04-08 00:32:18.825770 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:32:18.825777 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:32:18.825785 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:32:18.825792 | orchestrator | 2026-04-08 00:32:18.825799 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-04-08 00:32:18.825806 | orchestrator | Wednesday 08 April 2026 00:31:59 +0000 (0:00:02.073) 0:00:03.916 ******* 2026-04-08 00:32:18.825813 | orchestrator | ok: [testbed-manager] 2026-04-08 00:32:18.825820 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:32:18.825827 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:32:18.825834 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:32:18.825841 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:32:18.825848 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:32:18.825855 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:32:18.825864 | orchestrator | 2026-04-08 00:32:18.825872 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-04-08 00:32:18.825881 | orchestrator | Wednesday 08 April 2026 00:32:00 +0000 (0:00:01.480) 0:00:05.397 ******* 2026-04-08 00:32:18.825889 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-04-08 00:32:18.825898 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-04-08 00:32:18.825907 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-04-08 00:32:18.825915 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-04-08 00:32:18.825923 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-04-08 00:32:18.825932 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-04-08 00:32:18.825939 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-04-08 00:32:18.825947 | orchestrator | 2026-04-08 00:32:18.825954 | orchestrator | TASK [osism.commons.network : 
Prepare netplan configuration template] ********** 2026-04-08 00:32:18.825963 | orchestrator | Wednesday 08 April 2026 00:32:01 +0000 (0:00:01.183) 0:00:06.580 ******* 2026-04-08 00:32:18.825971 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-08 00:32:18.825981 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-08 00:32:18.825989 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-08 00:32:18.825996 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 00:32:18.826003 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 00:32:18.826011 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-08 00:32:18.826092 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-08 00:32:18.826101 | orchestrator | 2026-04-08 00:32:18.826109 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-04-08 00:32:18.826131 | orchestrator | Wednesday 08 April 2026 00:32:05 +0000 (0:00:03.344) 0:00:09.924 ******* 2026-04-08 00:32:18.826140 | orchestrator | changed: [testbed-manager] 2026-04-08 00:32:18.826148 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:32:18.826156 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:32:18.826182 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:32:18.826191 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:32:18.826199 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:32:18.826207 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:32:18.826215 | orchestrator | 2026-04-08 00:32:18.826222 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-04-08 00:32:18.826230 | orchestrator | Wednesday 08 April 2026 00:32:06 +0000 (0:00:01.608) 0:00:11.533 ******* 2026-04-08 00:32:18.826236 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 00:32:18.826243 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 00:32:18.826249 | orchestrator | ok: [testbed-node-1 
-> localhost] 2026-04-08 00:32:18.826255 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-08 00:32:18.826262 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-08 00:32:18.826269 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-08 00:32:18.826276 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-08 00:32:18.826284 | orchestrator | 2026-04-08 00:32:18.826291 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-04-08 00:32:18.826298 | orchestrator | Wednesday 08 April 2026 00:32:08 +0000 (0:00:01.892) 0:00:13.425 ******* 2026-04-08 00:32:18.826305 | orchestrator | ok: [testbed-manager] 2026-04-08 00:32:18.826313 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:32:18.826321 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:32:18.826328 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:32:18.826336 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:32:18.826343 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:32:18.826350 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:32:18.826358 | orchestrator | 2026-04-08 00:32:18.826365 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-04-08 00:32:18.826396 | orchestrator | Wednesday 08 April 2026 00:32:09 +0000 (0:00:00.942) 0:00:14.367 ******* 2026-04-08 00:32:18.826405 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:32:18.826412 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:32:18.826420 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:32:18.826427 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:32:18.826435 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:32:18.826442 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:32:18.826448 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:32:18.826455 | orchestrator | 2026-04-08 00:32:18.826463 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2026-04-08 00:32:18.826471 | orchestrator | Wednesday 08 April 2026 00:32:10 +0000 (0:00:00.747) 0:00:15.115 ******* 2026-04-08 00:32:18.826479 | orchestrator | ok: [testbed-manager] 2026-04-08 00:32:18.826505 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:32:18.826511 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:32:18.826517 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:32:18.826523 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:32:18.826530 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:32:18.826537 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:32:18.826544 | orchestrator | 2026-04-08 00:32:18.826551 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-04-08 00:32:18.826557 | orchestrator | Wednesday 08 April 2026 00:32:12 +0000 (0:00:01.985) 0:00:17.101 ******* 2026-04-08 00:32:18.826564 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:32:18.826571 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:32:18.826578 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:32:18.826584 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:32:18.826591 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:32:18.826598 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:32:18.826622 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'}) 2026-04-08 00:32:18.826632 | orchestrator | 2026-04-08 00:32:18.826639 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-04-08 00:32:18.826646 | orchestrator | Wednesday 08 April 2026 00:32:13 +0000 (0:00:00.892) 0:00:17.993 ******* 2026-04-08 00:32:18.826652 | orchestrator | ok: [testbed-manager] 2026-04-08 00:32:18.826658 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:32:18.826665 | orchestrator | changed: [testbed-node-1] 2026-04-08 
00:32:18.826671 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:32:18.826677 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:32:18.826684 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:32:18.826691 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:32:18.826698 | orchestrator | 2026-04-08 00:32:18.826704 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-04-08 00:32:18.826711 | orchestrator | Wednesday 08 April 2026 00:32:14 +0000 (0:00:01.355) 0:00:19.348 ******* 2026-04-08 00:32:18.826718 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:32:18.826727 | orchestrator | 2026-04-08 00:32:18.826734 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-08 00:32:18.826741 | orchestrator | Wednesday 08 April 2026 00:32:15 +0000 (0:00:01.237) 0:00:20.586 ******* 2026-04-08 00:32:18.826748 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:32:18.826755 | orchestrator | ok: [testbed-manager] 2026-04-08 00:32:18.826761 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:32:18.826768 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:32:18.826775 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:32:18.826782 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:32:18.826789 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:32:18.826796 | orchestrator | 2026-04-08 00:32:18.826802 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-04-08 00:32:18.826809 | orchestrator | Wednesday 08 April 2026 00:32:17 +0000 (0:00:01.085) 0:00:21.671 ******* 2026-04-08 00:32:18.826815 | orchestrator | ok: [testbed-manager] 2026-04-08 00:32:18.826822 | orchestrator | ok: [testbed-node-0] 2026-04-08 
00:32:18.826829 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:32:18.826836 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:32:18.826842 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:32:18.826849 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:32:18.826856 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:32:18.826862 | orchestrator | 2026-04-08 00:32:18.826870 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-08 00:32:18.826885 | orchestrator | Wednesday 08 April 2026 00:32:17 +0000 (0:00:00.758) 0:00:22.430 ******* 2026-04-08 00:32:18.826893 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-04-08 00:32:18.826901 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-04-08 00:32:18.826908 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-04-08 00:32:18.826915 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-04-08 00:32:18.826922 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-08 00:32:18.826929 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-04-08 00:32:18.826936 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-08 00:32:18.826942 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-04-08 00:32:18.826948 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-08 00:32:18.826954 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-04-08 00:32:18.826968 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-08 00:32:18.826974 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-08 00:32:18.826981 | orchestrator | changed: [testbed-node-4] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-04-08 00:32:18.826988 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-08 00:32:18.826995 | orchestrator | 2026-04-08 00:32:18.827014 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-04-08 00:32:33.562414 | orchestrator | Wednesday 08 April 2026 00:32:18 +0000 (0:00:01.047) 0:00:23.477 ******* 2026-04-08 00:32:33.562603 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:32:33.562623 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:32:33.562636 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:32:33.562647 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:32:33.562658 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:32:33.562669 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:32:33.562680 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:32:33.562691 | orchestrator | 2026-04-08 00:32:33.562703 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-04-08 00:32:33.562722 | orchestrator | Wednesday 08 April 2026 00:32:19 +0000 (0:00:00.795) 0:00:24.272 ******* 2026-04-08 00:32:33.562744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-1, testbed-node-5, testbed-node-4, testbed-node-2, testbed-node-3 2026-04-08 00:32:33.562766 | orchestrator | 2026-04-08 00:32:33.562786 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-04-08 00:32:33.562805 | orchestrator | Wednesday 08 April 2026 00:32:23 +0000 (0:00:04.259) 0:00:28.532 ******* 2026-04-08 00:32:33.562826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:32:33.562849 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-08 00:32:33.562870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:32:33.562891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:32:33.562910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-08 00:32:33.562928 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:32:33.562948 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:32:33.562989 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 
1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-08 00:32:33.563055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-08 00:32:33.563080 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:32:33.563104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-08 00:32:33.563150 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-08 00:32:33.563165 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-08 00:32:33.563179 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': 
['192.168.128.15/20']}}) 2026-04-08 00:32:33.563192 | orchestrator | 2026-04-08 00:32:33.563204 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-04-08 00:32:33.563217 | orchestrator | Wednesday 08 April 2026 00:32:28 +0000 (0:00:04.913) 0:00:33.445 ******* 2026-04-08 00:32:33.563230 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-08 00:32:33.563243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:32:33.563256 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-08 00:32:33.563270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:32:33.563283 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:32:33.563296 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', 
'192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:32:33.563315 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:32:33.563348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-08 00:32:33.563368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-08 00:32:33.563385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-08 00:32:33.563403 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-08 00:32:33.563423 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-08 00:32:33.563450 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-08 00:32:45.509176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-08 00:32:45.509283 | orchestrator | 2026-04-08 00:32:45.509298 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-04-08 00:32:45.509310 | orchestrator | Wednesday 08 April 2026 00:32:33 +0000 (0:00:04.902) 0:00:38.347 ******* 2026-04-08 00:32:45.509320 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:32:45.509330 | orchestrator | 2026-04-08 00:32:45.509339 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-08 00:32:45.509348 | orchestrator | Wednesday 08 April 2026 00:32:34 +0000 (0:00:01.097) 0:00:39.444 ******* 2026-04-08 00:32:45.509373 | orchestrator | ok: [testbed-manager] 2026-04-08 00:32:45.509384 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:32:45.509401 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:32:45.509411 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:32:45.509419 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:32:45.509428 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:32:45.509436 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:32:45.509445 | orchestrator | 2026-04-08 00:32:45.509549 | orchestrator | TASK [osism.commons.network : Remove unused configuration 
files] *************** 2026-04-08 00:32:45.509561 | orchestrator | Wednesday 08 April 2026 00:32:35 +0000 (0:00:00.966) 0:00:40.411 ******* 2026-04-08 00:32:45.509571 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-08 00:32:45.509581 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-08 00:32:45.509590 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-08 00:32:45.509622 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-08 00:32:45.509631 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:32:45.509641 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-08 00:32:45.509650 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-08 00:32:45.509659 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-08 00:32:45.509668 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-08 00:32:45.509676 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:32:45.509685 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-08 00:32:45.509710 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-08 00:32:45.509720 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-08 00:32:45.509731 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-08 00:32:45.509741 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-08 00:32:45.509756 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-08 
2026-04-08 00:32:45.509797 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-08 00:32:45.509812 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-08 00:32:45.509827 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:32:45.509847 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-08 00:32:45.509862 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-08 00:32:45.509876 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-08 00:32:45.509890 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-08 00:32:45.509905 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:32:45.509926 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-08 00:32:45.509944 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-08 00:32:45.509958 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-08 00:32:45.509973 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-08 00:32:45.509987 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:32:45.510002 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:32:45.510091 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-08 00:32:45.510112 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-08 00:32:45.510127 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-08 00:32:45.510141 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-08 00:32:45.510154 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:32:45.510166 | orchestrator |
2026-04-08 00:32:45.510178 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-04-08 00:32:45.510214 | orchestrator | Wednesday 08 April 2026 00:32:36 +0000 (0:00:00.681) 0:00:41.093 *******
2026-04-08 00:32:45.510230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:32:45.510245 | orchestrator |
2026-04-08 00:32:45.510260 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-04-08 00:32:45.510275 | orchestrator | Wednesday 08 April 2026 00:32:37 +0000 (0:00:01.073) 0:00:42.166 *******
2026-04-08 00:32:45.510304 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:32:45.510319 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:32:45.510334 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:32:45.510349 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:32:45.510364 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:32:45.510379 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:32:45.510393 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:32:45.510402 | orchestrator |
2026-04-08 00:32:45.510411 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-04-08 00:32:45.510420 | orchestrator | Wednesday 08 April 2026 00:32:38 +0000 (0:00:00.606) 0:00:42.773 *******
2026-04-08 00:32:45.510429 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:32:45.510437 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:32:45.510446 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:32:45.510486 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:32:45.510496 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:32:45.510505 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:32:45.510514 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:32:45.510522 | orchestrator |
2026-04-08 00:32:45.510532 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-04-08 00:32:45.510540 | orchestrator | Wednesday 08 April 2026 00:32:38 +0000 (0:00:00.603) 0:00:43.377 *******
2026-04-08 00:32:45.510549 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:32:45.510558 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:32:45.510567 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:32:45.510576 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:32:45.510584 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:32:45.510593 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:32:45.510602 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:32:45.510610 | orchestrator |
2026-04-08 00:32:45.510619 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-04-08 00:32:45.510628 | orchestrator | Wednesday 08 April 2026 00:32:39 +0000 (0:00:00.719) 0:00:44.096 *******
2026-04-08 00:32:45.510637 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:32:45.510646 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:45.510655 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:32:45.510664 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:32:45.510672 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:32:45.510681 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:32:45.510690 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:32:45.510699 | orchestrator |
2026-04-08 00:32:45.510708 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-04-08 00:32:45.510717 | orchestrator | Wednesday 08 April 2026 00:32:40 +0000 (0:00:01.480) 0:00:45.577 *******
2026-04-08 00:32:45.510726 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:45.510735 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:32:45.510743 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:32:45.510752 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:32:45.510761 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:32:45.510769 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:32:45.510778 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:32:45.510787 | orchestrator |
2026-04-08 00:32:45.510796 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-04-08 00:32:45.510805 | orchestrator | Wednesday 08 April 2026 00:32:42 +0000 (0:00:01.199) 0:00:46.776 *******
2026-04-08 00:32:45.510814 | orchestrator | ok: [testbed-manager]
2026-04-08 00:32:45.510823 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:32:45.510831 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:32:45.510845 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:32:45.510854 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:32:45.510863 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:32:45.510884 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:32:45.510899 | orchestrator |
2026-04-08 00:32:45.510921 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-04-08 00:32:45.510944 | orchestrator | Wednesday 08 April 2026 00:32:44 +0000 (0:00:02.134) 0:00:48.910 *******
2026-04-08 00:32:45.510958 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:32:45.510974 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:32:45.510988 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:32:45.511002 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:32:45.511016 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:32:45.511031 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:32:45.511043 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:32:45.511051 | orchestrator |
2026-04-08 00:32:45.511060 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-04-08 00:32:45.511069 | orchestrator | Wednesday 08 April 2026 00:32:44 +0000 (0:00:00.593) 0:00:49.504 *******
2026-04-08 00:32:45.511078 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:32:45.511087 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:32:45.511095 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:32:45.511104 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:32:45.511113 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:32:45.511122 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:32:45.511130 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:32:45.511139 | orchestrator |
2026-04-08 00:32:45.511148 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:32:45.511158 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-08 00:32:45.511169 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-08 00:32:45.511190 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-08 00:32:45.745582 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-08 00:32:45.745717 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-08 00:32:45.745743 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-08 00:32:45.745764 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-08 00:32:45.745786 | orchestrator |
2026-04-08 00:32:45.745805 | orchestrator |
2026-04-08 00:32:45.745825 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:32:45.745848 | orchestrator | Wednesday 08 April 2026 00:32:45 +0000 (0:00:00.661) 0:00:50.165 *******
2026-04-08 00:32:45.745869 | orchestrator | ===============================================================================
2026-04-08 00:32:45.745890 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.91s
2026-04-08 00:32:45.745911 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.90s
2026-04-08 00:32:45.745931 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.26s
2026-04-08 00:32:45.745948 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.34s
2026-04-08 00:32:45.745966 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.13s
2026-04-08 00:32:45.745987 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.07s
2026-04-08 00:32:45.746009 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.99s
2026-04-08 00:32:45.746098 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.89s
2026-04-08 00:32:45.746140 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.61s
2026-04-08 00:32:45.746152 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.48s
2026-04-08 00:32:45.746163 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.48s
2026-04-08 00:32:45.746174 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.35s
2026-04-08 00:32:45.746185 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.24s
2026-04-08 00:32:45.746196 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.20s
2026-04-08 00:32:45.746207 | orchestrator | osism.commons.network : Create required directories --------------------- 1.18s
2026-04-08 00:32:45.746218 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.10s
2026-04-08 00:32:45.746229 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.09s
2026-04-08 00:32:45.746240 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.07s
2026-04-08 00:32:45.746251 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.05s
2026-04-08 00:32:45.746262 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.01s
2026-04-08 00:32:45.916840 | orchestrator | + osism apply wireguard
2026-04-08 00:32:57.241308 | orchestrator | 2026-04-08 00:32:57 | INFO  | Prepare task for execution of wireguard.
2026-04-08 00:32:57.320233 | orchestrator | 2026-04-08 00:32:57 | INFO  | Task c4e49df6-d028-42b2-817e-c9e1632c95ca (wireguard) was prepared for execution.
2026-04-08 00:32:57.320314 | orchestrator | 2026-04-08 00:32:57 | INFO  | It takes a moment until task c4e49df6-d028-42b2-817e-c9e1632c95ca (wireguard) has been started and output is visible here.
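[Editor's note] The osism.commons.network role above manages systemd-networkd unit pairs such as /etc/systemd/network/30-vxlan0.netdev and 30-vxlan0.network on the nodes. For orientation, a minimal sketch of what such a VXLAN pair typically contains follows; the VNI, port, and address are illustrative assumptions, not values taken from this deployment:

```
# /etc/systemd/network/30-vxlan0.netdev (illustrative)
[NetDev]
Name=vxlan0
Kind=vxlan

[VXLAN]
VNI=42
DestinationPort=4789

# /etc/systemd/network/30-vxlan0.network (illustrative)
[Match]
Name=vxlan0

[Network]
Address=192.0.2.10/24
```

The "Reload systemd-networkd" handler seen above is what picks such files up after they change.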
2026-04-08 00:33:14.695659 | orchestrator |
2026-04-08 00:33:14.695759 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-04-08 00:33:14.695772 | orchestrator |
2026-04-08 00:33:14.695780 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-04-08 00:33:14.695789 | orchestrator | Wednesday 08 April 2026 00:33:00 +0000 (0:00:00.264) 0:00:00.264 *******
2026-04-08 00:33:14.695797 | orchestrator | ok: [testbed-manager]
2026-04-08 00:33:14.695806 | orchestrator |
2026-04-08 00:33:14.695813 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-04-08 00:33:14.695822 | orchestrator | Wednesday 08 April 2026 00:33:02 +0000 (0:00:01.489) 0:00:01.754 *******
2026-04-08 00:33:14.695830 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:14.695839 | orchestrator |
2026-04-08 00:33:14.695847 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-04-08 00:33:14.695855 | orchestrator | Wednesday 08 April 2026 00:33:07 +0000 (0:00:05.185) 0:00:06.940 *******
2026-04-08 00:33:14.695863 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:14.695870 | orchestrator |
2026-04-08 00:33:14.695878 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-04-08 00:33:14.695886 | orchestrator | Wednesday 08 April 2026 00:33:07 +0000 (0:00:00.568) 0:00:07.508 *******
2026-04-08 00:33:14.695894 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:14.695901 | orchestrator |
2026-04-08 00:33:14.695909 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-04-08 00:33:14.695917 | orchestrator | Wednesday 08 April 2026 00:33:08 +0000 (0:00:00.424) 0:00:07.933 *******
2026-04-08 00:33:14.695925 | orchestrator | ok: [testbed-manager]
2026-04-08 00:33:14.695933 | orchestrator |
2026-04-08 00:33:14.695941 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-04-08 00:33:14.695950 | orchestrator | Wednesday 08 April 2026 00:33:08 +0000 (0:00:00.535) 0:00:08.468 *******
2026-04-08 00:33:14.695958 | orchestrator | ok: [testbed-manager]
2026-04-08 00:33:14.695966 | orchestrator |
2026-04-08 00:33:14.695973 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-04-08 00:33:14.695981 | orchestrator | Wednesday 08 April 2026 00:33:09 +0000 (0:00:00.411) 0:00:08.879 *******
2026-04-08 00:33:14.696013 | orchestrator | ok: [testbed-manager]
2026-04-08 00:33:14.696020 | orchestrator |
2026-04-08 00:33:14.696026 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-04-08 00:33:14.696032 | orchestrator | Wednesday 08 April 2026 00:33:09 +0000 (0:00:00.398) 0:00:09.278 *******
2026-04-08 00:33:14.696038 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:14.696045 | orchestrator |
2026-04-08 00:33:14.696051 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-04-08 00:33:14.696058 | orchestrator | Wednesday 08 April 2026 00:33:10 +0000 (0:00:01.166) 0:00:10.445 *******
2026-04-08 00:33:14.696064 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-08 00:33:14.696071 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:14.696078 | orchestrator |
2026-04-08 00:33:14.696085 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-04-08 00:33:14.696092 | orchestrator | Wednesday 08 April 2026 00:33:11 +0000 (0:00:00.896) 0:00:11.341 *******
2026-04-08 00:33:14.696099 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:14.696106 | orchestrator |
2026-04-08 00:33:14.696112 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-04-08 00:33:14.696119 | orchestrator | Wednesday 08 April 2026 00:33:13 +0000 (0:00:01.945) 0:00:13.287 *******
2026-04-08 00:33:14.696126 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:14.696133 | orchestrator |
2026-04-08 00:33:14.696140 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:33:14.696148 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:33:14.696157 | orchestrator |
2026-04-08 00:33:14.696164 | orchestrator |
2026-04-08 00:33:14.696170 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:33:14.696176 | orchestrator | Wednesday 08 April 2026 00:33:14 +0000 (0:00:00.943) 0:00:14.231 *******
2026-04-08 00:33:14.696183 | orchestrator | ===============================================================================
2026-04-08 00:33:14.696190 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.19s
2026-04-08 00:33:14.696196 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.95s
2026-04-08 00:33:14.696203 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.49s
2026-04-08 00:33:14.696210 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.17s
2026-04-08 00:33:14.696217 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s
2026-04-08 00:33:14.696223 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.90s
2026-04-08 00:33:14.696230 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s
2026-04-08 00:33:14.696237 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.54s
2026-04-08 00:33:14.696243 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s
2026-04-08 00:33:14.696249 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.41s
2026-04-08 00:33:14.696256 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s
2026-04-08 00:33:14.864524 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-04-08 00:33:14.896379 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-04-08 00:33:14.896538 | orchestrator | Dload Upload Total Spent Left Speed
2026-04-08 00:33:14.968678 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 206 0 --:--:-- --:--:-- --:--:-- 205
2026-04-08 00:33:14.983194 | orchestrator | + osism apply --environment custom workarounds
2026-04-08 00:33:16.229710 | orchestrator | 2026-04-08 00:33:16 | INFO  | Trying to run play workarounds in environment custom
2026-04-08 00:33:26.407979 | orchestrator | 2026-04-08 00:33:26 | INFO  | Prepare task for execution of workarounds.
2026-04-08 00:33:26.483854 | orchestrator | 2026-04-08 00:33:26 | INFO  | Task 618550cb-f488-44ef-b169-9651118c4596 (workarounds) was prepared for execution.
2026-04-08 00:33:26.483956 | orchestrator | 2026-04-08 00:33:26 | INFO  | It takes a moment until task 618550cb-f488-44ef-b169-9651118c4596 (workarounds) has been started and output is visible here.
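[Editor's note] The wireguard play above generated server and preshared keys and rendered a wg0.conf before starting wg-quick@wg0. As a hedged sketch of the shape such a server-side configuration typically takes, under the assumption of a standard wg-quick layout; every key, address, and the port below are placeholders, not values from this run:

```
# /etc/wireguard/wg0.conf (illustrative placeholders only)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server private key, e.g. output of `wg genkey`>

[Peer]
PublicKey = <client public key>
PresharedKey = <key from `wg genpsk`>
AllowedIPs = 10.0.0.2/32
```

The "Restart wg0 service" handler then corresponds to restarting wg-quick@wg0 so the rendered configuration takes effect.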
2026-04-08 00:33:50.383217 | orchestrator |
2026-04-08 00:33:50.383330 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 00:33:50.383347 | orchestrator |
2026-04-08 00:33:50.383359 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-04-08 00:33:50.383371 | orchestrator | Wednesday 08 April 2026 00:33:29 +0000 (0:00:00.171) 0:00:00.171 *******
2026-04-08 00:33:50.383423 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-04-08 00:33:50.383438 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-04-08 00:33:50.383449 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-04-08 00:33:50.383460 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-04-08 00:33:50.383472 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-04-08 00:33:50.383483 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-04-08 00:33:50.383495 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-04-08 00:33:50.383506 | orchestrator |
2026-04-08 00:33:50.383517 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-04-08 00:33:50.383528 | orchestrator |
2026-04-08 00:33:50.383539 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-08 00:33:50.383550 | orchestrator | Wednesday 08 April 2026 00:33:30 +0000 (0:00:00.694) 0:00:00.866 *******
2026-04-08 00:33:50.383561 | orchestrator | ok: [testbed-manager]
2026-04-08 00:33:50.383573 | orchestrator |
2026-04-08 00:33:50.383585 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-04-08 00:33:50.383596 | orchestrator |
2026-04-08 00:33:50.383607 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-08 00:33:50.383618 | orchestrator | Wednesday 08 April 2026 00:33:32 +0000 (0:00:02.555) 0:00:03.422 *******
2026-04-08 00:33:50.383629 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:33:50.383640 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:33:50.383651 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:33:50.383662 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:33:50.383673 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:33:50.383685 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:33:50.383696 | orchestrator |
2026-04-08 00:33:50.383707 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-04-08 00:33:50.383718 | orchestrator |
2026-04-08 00:33:50.383730 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-04-08 00:33:50.383741 | orchestrator | Wednesday 08 April 2026 00:33:35 +0000 (0:00:02.353) 0:00:05.776 *******
2026-04-08 00:33:50.383753 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-08 00:33:50.383768 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-08 00:33:50.383781 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-08 00:33:50.383794 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-08 00:33:50.383807 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-08 00:33:50.383819 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-08 00:33:50.383857 | orchestrator |
2026-04-08 00:33:50.383870 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-04-08 00:33:50.383883 | orchestrator | Wednesday 08 April 2026 00:33:36 +0000 (0:00:01.271) 0:00:07.047 *******
2026-04-08 00:33:50.383896 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:33:50.383908 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:33:50.383922 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:33:50.383934 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:33:50.383946 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:33:50.383959 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:33:50.383971 | orchestrator |
2026-04-08 00:33:50.383983 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-04-08 00:33:50.383996 | orchestrator | Wednesday 08 April 2026 00:33:40 +0000 (0:00:03.868) 0:00:10.916 *******
2026-04-08 00:33:50.384009 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:33:50.384022 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:33:50.384035 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:33:50.384048 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:33:50.384062 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:33:50.384074 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:33:50.384086 | orchestrator |
2026-04-08 00:33:50.384114 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-04-08 00:33:50.384125 | orchestrator |
2026-04-08 00:33:50.384136 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-04-08 00:33:50.384147 | orchestrator | Wednesday 08 April 2026 00:33:40 +0000 (0:00:00.498) 0:00:11.414 *******
2026-04-08 00:33:50.384158 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:50.384170 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:33:50.384181 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:33:50.384192 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:33:50.384203 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:33:50.384214 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:33:50.384225 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:33:50.384235 | orchestrator |
2026-04-08 00:33:50.384247 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-04-08 00:33:50.384258 | orchestrator | Wednesday 08 April 2026 00:33:42 +0000 (0:00:01.667) 0:00:13.081 *******
2026-04-08 00:33:50.384269 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:50.384280 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:33:50.384290 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:33:50.384301 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:33:50.384312 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:33:50.384323 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:33:50.384352 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:33:50.384363 | orchestrator |
2026-04-08 00:33:50.384374 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-04-08 00:33:50.384410 | orchestrator | Wednesday 08 April 2026 00:33:43 +0000 (0:00:01.354) 0:00:14.435 *******
2026-04-08 00:33:50.384425 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:33:50.384436 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:33:50.384447 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:33:50.384458 | orchestrator | ok: [testbed-manager]
2026-04-08 00:33:50.384468 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:33:50.384479 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:33:50.384490 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:33:50.384501 | orchestrator |
2026-04-08 00:33:50.384512 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-04-08 00:33:50.384523 | orchestrator | Wednesday 08 April 2026 00:33:45 +0000 (0:00:01.611) 0:00:16.047 *******
2026-04-08 00:33:50.384534 | orchestrator | changed: [testbed-manager]
2026-04-08 00:33:50.384545 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:33:50.384561 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:33:50.384579 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:33:50.384609 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:33:50.384628 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:33:50.384646 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:33:50.384661 | orchestrator |
2026-04-08 00:33:50.384672 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-04-08 00:33:50.384683 | orchestrator | Wednesday 08 April 2026 00:33:46 +0000 (0:00:01.468) 0:00:17.515 *******
2026-04-08 00:33:50.384694 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:33:50.384705 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:33:50.384716 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:33:50.384727 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:33:50.384737 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:33:50.384748 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:33:50.384759 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:33:50.384770 | orchestrator |
2026-04-08 00:33:50.384781 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-04-08 00:33:50.384792 | orchestrator |
2026-04-08 00:33:50.384803 | orchestrator | TASK [Install python3-docker] **************************************************
2026-04-08 00:33:50.384814 | orchestrator | Wednesday 08 April 2026 00:33:47 +0000 (0:00:00.702) 0:00:18.218 *******
2026-04-08 00:33:50.384825 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:33:50.384836 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:33:50.384847 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:33:50.384857 | orchestrator | ok: [testbed-manager]
2026-04-08 00:33:50.384868 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:33:50.384879 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:33:50.384890 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:33:50.384900 | orchestrator |
2026-04-08 00:33:50.384911 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:33:50.384924 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-08 00:33:50.384936 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:33:50.384947 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:33:50.384957 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:33:50.384968 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:33:50.384979 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:33:50.384990 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:33:50.385001 | orchestrator |
2026-04-08 00:33:50.385012 | orchestrator |
2026-04-08 00:33:50.385023 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:33:50.385034 | orchestrator | Wednesday 08 April 2026 00:33:50 +0000 (0:00:02.760) 0:00:20.979 *******
2026-04-08 00:33:50.385045 | orchestrator | ===============================================================================
2026-04-08 00:33:50.385062 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.87s
2026-04-08 00:33:50.385073 | orchestrator | Install python3-docker -------------------------------------------------- 2.76s
2026-04-08 00:33:50.385084 | orchestrator | Apply netplan configuration --------------------------------------------- 2.56s
2026-04-08 00:33:50.385095 | orchestrator | Apply netplan configuration --------------------------------------------- 2.35s
2026-04-08 00:33:50.385106 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.67s
2026-04-08 00:33:50.385124 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.61s
2026-04-08 00:33:50.385135 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.47s
2026-04-08 00:33:50.385146 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.35s
2026-04-08 00:33:50.385157 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.27s
2026-04-08 00:33:50.385168 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.70s
2026-04-08 00:33:50.385179 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.69s
2026-04-08 00:33:50.385198 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.50s
2026-04-08 00:33:50.840843 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-04-08 00:34:02.276105 | orchestrator | 2026-04-08 00:34:02 | INFO  | Prepare task for execution of reboot.
2026-04-08 00:34:02.355424 | orchestrator | 2026-04-08 00:34:02 | INFO  | Task 5f60bfb3-bbb7-40fe-a7fb-dc11e28999cd (reboot) was prepared for execution.
2026-04-08 00:34:02.355527 | orchestrator | 2026-04-08 00:34:02 | INFO  | It takes a moment until task 5f60bfb3-bbb7-40fe-a7fb-dc11e28999cd (reboot) has been started and output is visible here.
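[Editor's note] The reboot play invoked above refuses to run unless the extra variable ireallymeanit=yes is supplied, which is why the command line passes `-e ireallymeanit=yes` (the "Exit playbook, if user did not mean to reboot systems" task is skipped when confirmation is given). A minimal shell sketch of that confirmation-guard pattern; the variable name mirrors the CLI flag, and the logic is illustrative, not the play's actual implementation:

```shell
# Confirmation guard behind "osism apply reboot ... -e ireallymeanit=yes" (sketch).
# The play exits early unless the operator explicitly confirms the reboot.
ireallymeanit="yes"            # supplied on the CLI as: -e ireallymeanit=yes

if [ "$ireallymeanit" = "yes" ]; then
  result="rebooting"           # real play: trigger the reboot without waiting
else
  result="aborted"             # real play: fail with a hint to pass the variable
fi
echo "$result"
```

This double-confirmation style is common for destructive plays so that a stray `osism apply reboot` cannot take down nodes by accident.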
2026-04-08 00:34:12.871719 | orchestrator |
2026-04-08 00:34:12.871830 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-08 00:34:12.871847 | orchestrator |
2026-04-08 00:34:12.871859 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-08 00:34:12.871871 | orchestrator | Wednesday 08 April 2026 00:34:05 +0000 (0:00:00.226) 0:00:00.226 *******
2026-04-08 00:34:12.871883 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:34:12.871896 | orchestrator |
2026-04-08 00:34:12.871906 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-08 00:34:12.871917 | orchestrator | Wednesday 08 April 2026 00:34:05 +0000 (0:00:00.135) 0:00:00.361 *******
2026-04-08 00:34:12.871928 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:34:12.871939 | orchestrator |
2026-04-08 00:34:12.871950 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-08 00:34:12.871961 | orchestrator | Wednesday 08 April 2026 00:34:06 +0000 (0:00:01.187) 0:00:01.548 *******
2026-04-08 00:34:12.871972 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:34:12.871983 | orchestrator |
2026-04-08 00:34:12.871994 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-08 00:34:12.872004 | orchestrator |
2026-04-08 00:34:12.872015 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-08 00:34:12.872026 | orchestrator | Wednesday 08 April 2026 00:34:06 +0000 (0:00:00.096) 0:00:01.645 *******
2026-04-08 00:34:12.872037 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:34:12.872048 | orchestrator |
2026-04-08 00:34:12.872059 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-08 00:34:12.872070 | orchestrator | Wednesday 08 April 2026 00:34:06 +0000 (0:00:00.080) 0:00:01.725 *******
2026-04-08 00:34:12.872081 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:34:12.872092 | orchestrator |
2026-04-08 00:34:12.872103 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-08 00:34:12.872114 | orchestrator | Wednesday 08 April 2026 00:34:07 +0000 (0:00:00.914) 0:00:02.640 *******
2026-04-08 00:34:12.872125 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:34:12.872136 | orchestrator |
2026-04-08 00:34:12.872146 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-08 00:34:12.872157 | orchestrator |
2026-04-08 00:34:12.872168 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-08 00:34:12.872179 | orchestrator | Wednesday 08 April 2026 00:34:07 +0000 (0:00:00.090) 0:00:02.731 *******
2026-04-08 00:34:12.872190 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:34:12.872201 | orchestrator |
2026-04-08 00:34:12.872236 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-08 00:34:12.872248 | orchestrator | Wednesday 08 April 2026 00:34:07 +0000 (0:00:00.085) 0:00:02.816 *******
2026-04-08 00:34:12.872261 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:34:12.872273 | orchestrator |
2026-04-08 00:34:12.872285 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-08 00:34:12.872298 | orchestrator | Wednesday 08 April 2026 00:34:08 +0000 (0:00:00.986) 0:00:03.803 *******
2026-04-08 00:34:12.872310 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:34:12.872323 | orchestrator |
2026-04-08 00:34:12.872335 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-08 00:34:12.872347 | orchestrator |
2026-04-08 00:34:12.872359 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-08 00:34:12.872404 | orchestrator | Wednesday 08 April 2026 00:34:08 +0000 (0:00:00.110) 0:00:03.914 *******
2026-04-08 00:34:12.872417 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:34:12.872430 | orchestrator |
2026-04-08 00:34:12.872443 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-08 00:34:12.872456 | orchestrator | Wednesday 08 April 2026 00:34:09 +0000 (0:00:00.096) 0:00:04.010 *******
2026-04-08 00:34:12.872468 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:34:12.872481 | orchestrator |
2026-04-08 00:34:12.872493 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-08 00:34:12.872506 | orchestrator | Wednesday 08 April 2026 00:34:10 +0000 (0:00:00.990) 0:00:05.001 *******
2026-04-08 00:34:12.872519 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:34:12.872532 | orchestrator |
2026-04-08 00:34:12.872544 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-08 00:34:12.872557 | orchestrator |
2026-04-08 00:34:12.872570 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-08 00:34:12.872581 | orchestrator | Wednesday 08 April 2026 00:34:10 +0000 (0:00:00.097) 0:00:05.099 *******
2026-04-08 00:34:12.872592 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:34:12.872603 | orchestrator |
2026-04-08 00:34:12.872614 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-08 00:34:12.872625 | orchestrator | Wednesday 08 April 2026 00:34:10 +0000 (0:00:00.203) 0:00:05.303 *******
2026-04-08 00:34:12.872636 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:34:12.872647 | orchestrator |
2026-04-08 00:34:12.872658 | orchestrator | TASK [Reboot system - wait for the reboot to complete]
************************* 2026-04-08 00:34:12.872669 | orchestrator | Wednesday 08 April 2026 00:34:11 +0000 (0:00:01.026) 0:00:06.330 ******* 2026-04-08 00:34:12.872680 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:34:12.872690 | orchestrator | 2026-04-08 00:34:12.872701 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-08 00:34:12.872712 | orchestrator | 2026-04-08 00:34:12.872723 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-08 00:34:12.872733 | orchestrator | Wednesday 08 April 2026 00:34:11 +0000 (0:00:00.105) 0:00:06.435 ******* 2026-04-08 00:34:12.872750 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:34:12.872768 | orchestrator | 2026-04-08 00:34:12.872791 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-08 00:34:12.872817 | orchestrator | Wednesday 08 April 2026 00:34:11 +0000 (0:00:00.101) 0:00:06.537 ******* 2026-04-08 00:34:12.872835 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:34:12.872853 | orchestrator | 2026-04-08 00:34:12.872870 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-08 00:34:12.872887 | orchestrator | Wednesday 08 April 2026 00:34:12 +0000 (0:00:00.982) 0:00:07.519 ******* 2026-04-08 00:34:12.872926 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:34:12.872945 | orchestrator | 2026-04-08 00:34:12.872963 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:34:12.872982 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:34:12.873017 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:34:12.873037 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-04-08 00:34:12.873056 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:34:12.873074 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:34:12.873090 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:34:12.873101 | orchestrator | 2026-04-08 00:34:12.873112 | orchestrator | 2026-04-08 00:34:12.873123 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:34:12.873134 | orchestrator | Wednesday 08 April 2026 00:34:12 +0000 (0:00:00.041) 0:00:07.560 ******* 2026-04-08 00:34:12.873145 | orchestrator | =============================================================================== 2026-04-08 00:34:12.873156 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.09s 2026-04-08 00:34:12.873166 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.70s 2026-04-08 00:34:12.873177 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.54s 2026-04-08 00:34:13.057122 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-04-08 00:34:24.471805 | orchestrator | 2026-04-08 00:34:24 | INFO  | Prepare task for execution of wait-for-connection. 2026-04-08 00:34:24.551084 | orchestrator | 2026-04-08 00:34:24 | INFO  | Task d165b8d9-d7d0-433d-952d-9782f70c4c43 (wait-for-connection) was prepared for execution. 2026-04-08 00:34:24.551190 | orchestrator | 2026-04-08 00:34:24 | INFO  | It takes a moment until task d165b8d9-d7d0-433d-952d-9782f70c4c43 (wait-for-connection) has been started and output is visible here. 
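The `osism apply wait-for-connection` task launched above blocks until every rebooted node answers again (under the hood this is Ansible's `wait_for_connection` module, as the play output below shows). Its effect can be sketched as a plain SSH poll; the helper name and timeout values here are assumptions for illustration, not the osism implementation:

```shell
# Hypothetical sketch of what wait-for-connection achieves per node:
# retry a no-op SSH command until it succeeds or a timeout expires.
wait_for_node() {
    local host=$1
    local timeout=${2:-600}
    local elapsed=0

    # BatchMode avoids hanging on a password prompt; ConnectTimeout keeps
    # each probe short while the node is still booting.
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        sleep 5
        elapsed=$((elapsed + 5))
        if [ "$elapsed" -ge "$timeout" ]; then
            echo "$host not reachable after ${timeout}s" >&2
            return 1
        fi
    done
}
```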
2026-04-08 00:34:39.448545 | orchestrator | 2026-04-08 00:34:39.448620 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-08 00:34:39.448627 | orchestrator | 2026-04-08 00:34:39.448631 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-08 00:34:39.448636 | orchestrator | Wednesday 08 April 2026 00:34:27 +0000 (0:00:00.328) 0:00:00.328 ******* 2026-04-08 00:34:39.448640 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:34:39.448645 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:34:39.448649 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:34:39.448653 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:34:39.448657 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:34:39.448661 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:34:39.448665 | orchestrator | 2026-04-08 00:34:39.448669 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:34:39.448689 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:34:39.448698 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:34:39.448702 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:34:39.448706 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:34:39.448709 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:34:39.448730 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:34:39.448734 | orchestrator | 2026-04-08 00:34:39.448738 | orchestrator | 2026-04-08 00:34:39.448742 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-08 00:34:39.448746 | orchestrator | Wednesday 08 April 2026 00:34:39 +0000 (0:00:11.603) 0:00:11.932 ******* 2026-04-08 00:34:39.448749 | orchestrator | =============================================================================== 2026-04-08 00:34:39.448753 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.60s 2026-04-08 00:34:39.621840 | orchestrator | + osism apply hddtemp 2026-04-08 00:34:50.923251 | orchestrator | 2026-04-08 00:34:50 | INFO  | Prepare task for execution of hddtemp. 2026-04-08 00:34:50.999251 | orchestrator | 2026-04-08 00:34:50 | INFO  | Task 35098d73-ab50-4ffc-9cac-f9c467ac3f4c (hddtemp) was prepared for execution. 2026-04-08 00:34:50.999374 | orchestrator | 2026-04-08 00:34:50 | INFO  | It takes a moment until task 35098d73-ab50-4ffc-9cac-f9c467ac3f4c (hddtemp) has been started and output is visible here. 2026-04-08 00:35:16.842494 | orchestrator | 2026-04-08 00:35:16.842629 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-08 00:35:16.842657 | orchestrator | 2026-04-08 00:35:16.842677 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-08 00:35:16.842698 | orchestrator | Wednesday 08 April 2026 00:34:54 +0000 (0:00:00.316) 0:00:00.316 ******* 2026-04-08 00:35:16.842711 | orchestrator | ok: [testbed-manager] 2026-04-08 00:35:16.842723 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:35:16.842734 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:35:16.842745 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:35:16.842756 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:35:16.842767 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:35:16.842778 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:35:16.842789 | orchestrator | 2026-04-08 00:35:16.842800 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-04-08 00:35:16.842811 | orchestrator | Wednesday 08 April 2026 00:34:54 +0000 (0:00:00.554) 0:00:00.871 ******* 2026-04-08 00:35:16.842824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:35:16.842838 | orchestrator | 2026-04-08 00:35:16.842849 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-08 00:35:16.842861 | orchestrator | Wednesday 08 April 2026 00:34:55 +0000 (0:00:01.105) 0:00:01.976 ******* 2026-04-08 00:35:16.842872 | orchestrator | ok: [testbed-manager] 2026-04-08 00:35:16.842882 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:35:16.842893 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:35:16.842904 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:35:16.842915 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:35:16.842926 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:35:16.842937 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:35:16.842950 | orchestrator | 2026-04-08 00:35:16.842962 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-08 00:35:16.842976 | orchestrator | Wednesday 08 April 2026 00:34:58 +0000 (0:00:02.323) 0:00:04.299 ******* 2026-04-08 00:35:16.842988 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:35:16.843002 | orchestrator | changed: [testbed-manager] 2026-04-08 00:35:16.843015 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:35:16.843028 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:35:16.843040 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:35:16.843053 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:35:16.843065 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:35:16.843078 | 
orchestrator | 2026-04-08 00:35:16.843095 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-04-08 00:35:16.843143 | orchestrator | Wednesday 08 April 2026 00:34:59 +0000 (0:00:00.911) 0:00:05.211 ******* 2026-04-08 00:35:16.843157 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:35:16.843170 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:35:16.843183 | orchestrator | ok: [testbed-manager] 2026-04-08 00:35:16.843195 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:35:16.843207 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:35:16.843220 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:35:16.843232 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:35:16.843244 | orchestrator | 2026-04-08 00:35:16.843257 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-08 00:35:16.843270 | orchestrator | Wednesday 08 April 2026 00:35:01 +0000 (0:00:01.997) 0:00:07.208 ******* 2026-04-08 00:35:16.843283 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:35:16.843362 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:35:16.843377 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:35:16.843388 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:35:16.843399 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:35:16.843409 | orchestrator | changed: [testbed-manager] 2026-04-08 00:35:16.843420 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:35:16.843431 | orchestrator | 2026-04-08 00:35:16.843442 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-08 00:35:16.843470 | orchestrator | Wednesday 08 April 2026 00:35:01 +0000 (0:00:00.508) 0:00:07.717 ******* 2026-04-08 00:35:16.843481 | orchestrator | changed: [testbed-manager] 2026-04-08 00:35:16.843492 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:35:16.843503 | orchestrator | changed: [testbed-node-1] 
2026-04-08 00:35:16.843513 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:35:16.843524 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:35:16.843535 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:35:16.843546 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:35:16.843556 | orchestrator | 2026-04-08 00:35:16.843567 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-08 00:35:16.843581 | orchestrator | Wednesday 08 April 2026 00:35:13 +0000 (0:00:12.022) 0:00:19.739 ******* 2026-04-08 00:35:16.843601 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:35:16.843621 | orchestrator | 2026-04-08 00:35:16.843640 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-08 00:35:16.843658 | orchestrator | Wednesday 08 April 2026 00:35:14 +0000 (0:00:01.179) 0:00:20.919 ******* 2026-04-08 00:35:16.843674 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:35:16.843692 | orchestrator | changed: [testbed-manager] 2026-04-08 00:35:16.843710 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:35:16.843729 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:35:16.843749 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:35:16.843767 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:35:16.843786 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:35:16.843798 | orchestrator | 2026-04-08 00:35:16.843809 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:35:16.843821 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:35:16.843855 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:35:16.843867 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:35:16.843878 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:35:16.843902 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:35:16.843913 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:35:16.843924 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:35:16.843935 | orchestrator | 2026-04-08 00:35:16.843945 | orchestrator | 2026-04-08 00:35:16.843956 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:35:16.843967 | orchestrator | Wednesday 08 April 2026 00:35:16 +0000 (0:00:01.793) 0:00:22.712 ******* 2026-04-08 00:35:16.843978 | orchestrator | =============================================================================== 2026-04-08 00:35:16.843989 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.02s 2026-04-08 00:35:16.844000 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.32s 2026-04-08 00:35:16.844011 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.00s 2026-04-08 00:35:16.844021 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.79s 2026-04-08 00:35:16.844032 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.18s 2026-04-08 00:35:16.844043 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.11s 2026-04-08 00:35:16.844053 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 0.91s 2026-04-08 00:35:16.844064 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.56s 2026-04-08 00:35:16.844075 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.51s 2026-04-08 00:35:17.003756 | orchestrator | ++ semver 10.0.0 7.1.1 2026-04-08 00:35:17.050537 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-08 00:35:17.050640 | orchestrator | + sudo systemctl restart manager.service 2026-04-08 00:35:30.381117 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-08 00:35:30.381206 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-08 00:35:30.381217 | orchestrator | + local max_attempts=60 2026-04-08 00:35:30.381227 | orchestrator | + local name=ceph-ansible 2026-04-08 00:35:30.381235 | orchestrator | + local attempt_num=1 2026-04-08 00:35:30.381244 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:35:30.412536 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:35:30.412622 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:35:30.412633 | orchestrator | + sleep 5 2026-04-08 00:35:35.414498 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:35:35.471472 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:35:35.471553 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:35:35.471564 | orchestrator | + sleep 5 2026-04-08 00:35:40.474449 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:35:40.511295 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:35:40.511388 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:35:40.511405 | orchestrator | + sleep 5 2026-04-08 00:35:45.516014 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 
2026-04-08 00:35:45.547436 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:35:45.547526 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:35:45.547540 | orchestrator | + sleep 5 2026-04-08 00:35:50.550603 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:35:50.588025 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:35:50.588092 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:35:50.588098 | orchestrator | + sleep 5 2026-04-08 00:35:55.591796 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:35:55.622428 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:35:55.622545 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:35:55.622563 | orchestrator | + sleep 5 2026-04-08 00:36:00.626231 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:36:00.664301 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:36:00.664527 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:36:00.664552 | orchestrator | + sleep 5 2026-04-08 00:36:05.671118 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:36:05.713977 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-08 00:36:05.714133 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:36:05.714147 | orchestrator | + sleep 5 2026-04-08 00:36:10.717338 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:36:10.752578 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-08 00:36:10.752648 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:36:10.752656 | orchestrator | + sleep 5 2026-04-08 00:36:15.756483 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 
00:36:15.794382 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-08 00:36:15.794483 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:36:15.794499 | orchestrator | + sleep 5 2026-04-08 00:36:20.798112 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:36:20.835947 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-08 00:36:20.836018 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:36:20.836025 | orchestrator | + sleep 5 2026-04-08 00:36:25.840420 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:36:25.878425 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-08 00:36:25.878572 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:36:25.878601 | orchestrator | + sleep 5 2026-04-08 00:36:30.882598 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:36:30.916764 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-08 00:36:30.916862 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-08 00:36:30.916874 | orchestrator | + sleep 5 2026-04-08 00:36:35.921919 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-08 00:36:35.956321 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:36:35.956409 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-08 00:36:35.956426 | orchestrator | + local max_attempts=60 2026-04-08 00:36:35.956441 | orchestrator | + local name=kolla-ansible 2026-04-08 00:36:35.956467 | orchestrator | + local attempt_num=1 2026-04-08 00:36:35.956815 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-08 00:36:35.982371 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:36:35.982475 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-08 00:36:35.982511 | 
orchestrator | + local max_attempts=60 2026-04-08 00:36:35.982526 | orchestrator | + local name=osism-ansible 2026-04-08 00:36:35.982541 | orchestrator | + local attempt_num=1 2026-04-08 00:36:35.983067 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-08 00:36:36.015291 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-08 00:36:36.015384 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-08 00:36:36.015399 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-08 00:36:36.173501 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-08 00:36:36.306653 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-08 00:36:36.441776 | orchestrator | ARA in osism-ansible already disabled. 2026-04-08 00:36:36.574634 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-08 00:36:36.574722 | orchestrator | + osism apply gather-facts 2026-04-08 00:36:47.767704 | orchestrator | 2026-04-08 00:36:47 | INFO  | Prepare task for execution of gather-facts. 2026-04-08 00:36:47.822304 | orchestrator | 2026-04-08 00:36:47 | INFO  | Task 1a8ca2a5-d1fc-470f-82a4-9c2d2f19af18 (gather-facts) was prepared for execution. 2026-04-08 00:36:47.822428 | orchestrator | 2026-04-08 00:36:47 | INFO  | It takes a moment until task 1a8ca2a5-d1fc-470f-82a4-9c2d2f19af18 (gather-facts) has been started and output is visible here. 
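The `wait_for_container_healthy` helper traced above can be reconstructed from its xtrace: the function name, `max_attempts`/`name`/`attempt_num` variables, the `docker inspect` health probe, the `(( attempt_num++ == max_attempts ))` bail-out, and the 5-second interval are all visible in the log. The surrounding script is not shown, so treat this as a sketch consistent with the trace rather than the exact source (the trace calls `/usr/bin/docker`; plain `docker` is used here):

```shell
# Sketch of wait_for_container_healthy as seen in the xtrace above:
# poll a container's Docker health status until it reports "healthy",
# giving up after max_attempts probes spaced 5 seconds apart.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1

    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the run above the `ceph-ansible` container cycled through `unhealthy` and `starting` for roughly a minute after the `manager.service` restart before the probe finally matched `healthy`, while `kolla-ansible` and `osism-ansible` passed on their first probe.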
2026-04-08 00:36:58.778841 | orchestrator | 2026-04-08 00:36:58.778970 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-08 00:36:58.778981 | orchestrator | 2026-04-08 00:36:58.778988 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-08 00:36:58.779021 | orchestrator | Wednesday 08 April 2026 00:36:50 +0000 (0:00:00.208) 0:00:00.208 ******* 2026-04-08 00:36:58.779026 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:36:58.779033 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:36:58.779038 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:36:58.779044 | orchestrator | ok: [testbed-manager] 2026-04-08 00:36:58.779049 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:36:58.779055 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:36:58.779060 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:36:58.779066 | orchestrator | 2026-04-08 00:36:58.779071 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-08 00:36:58.779076 | orchestrator | 2026-04-08 00:36:58.779082 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-08 00:36:58.779087 | orchestrator | Wednesday 08 April 2026 00:36:57 +0000 (0:00:07.343) 0:00:07.552 ******* 2026-04-08 00:36:58.779092 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:36:58.779099 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:36:58.779104 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:36:58.779110 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:36:58.779115 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:36:58.779120 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:36:58.779125 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:36:58.779130 | orchestrator | 2026-04-08 00:36:58.779135 | orchestrator | PLAY RECAP 
*********************************************************************
2026-04-08 00:36:58.779141 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-08 00:36:58.779149 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-08 00:36:58.779169 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-08 00:36:58.779174 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-08 00:36:58.779179 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-08 00:36:58.779185 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-08 00:36:58.779216 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-08 00:36:58.779221 | orchestrator |
2026-04-08 00:36:58.779227 | orchestrator |
2026-04-08 00:36:58.779232 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:36:58.779237 | orchestrator | Wednesday 08 April 2026 00:36:58 +0000 (0:00:00.579) 0:00:08.131 *******
2026-04-08 00:36:58.779242 | orchestrator | ===============================================================================
2026-04-08 00:36:58.779248 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.34s
2026-04-08 00:36:58.779253 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s
2026-04-08 00:36:58.987518 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-04-08 00:36:59.008745 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-04-08 00:36:59.024102 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-04-08 00:36:59.034792 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-04-08 00:36:59.044346 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-04-08 00:36:59.054890 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-04-08 00:36:59.066548 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-04-08 00:36:59.079610 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-04-08 00:36:59.089473 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-04-08 00:36:59.099412 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-04-08 00:36:59.112651 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-04-08 00:36:59.122286 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-04-08 00:36:59.131355 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-04-08 00:36:59.145632 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-04-08 00:36:59.158011 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-04-08 00:36:59.168989 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-04-08 00:36:59.180329 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-04-08 00:36:59.190088 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-04-08 00:36:59.203004 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-04-08 00:36:59.214106 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-04-08 00:36:59.224081 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-04-08 00:36:59.236663 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-04-08 00:36:59.251290 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-04-08 00:36:59.265373 | orchestrator | + [[ false == \t\r\u\e ]]
2026-04-08 00:36:59.457844 | orchestrator | ok: Runtime: 0:22:55.639718
2026-04-08 00:36:59.565653 |
2026-04-08 00:36:59.565789 | TASK [Deploy services]
2026-04-08 00:37:00.099278 | orchestrator | skipping: Conditional result was False
2026-04-08 00:37:00.116970 |
2026-04-08 00:37:00.117142 | TASK [Deploy in a nutshell]
2026-04-08 00:37:00.861314 | orchestrator | + set -e
2026-04-08 00:37:00.861969 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-08 00:37:00.862010 | orchestrator | ++ export INTERACTIVE=false
2026-04-08 00:37:00.862060 | orchestrator | ++ INTERACTIVE=false
2026-04-08 00:37:00.862074 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-08 00:37:00.862086 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-08 00:37:00.862099 | orchestrator | + source /opt/manager-vars.sh
2026-04-08 00:37:00.862139 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-08 00:37:00.862162 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-08 00:37:00.862175 | orchestrator | ++ export CEPH_VERSION=
2026-04-08 00:37:00.862218 | orchestrator | ++ CEPH_VERSION=
2026-04-08 00:37:00.862231 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-08 00:37:00.862247 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-08 00:37:00.862257 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-08 00:37:00.862275 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-08 00:37:00.862286 | orchestrator | ++ export OPENSTACK_VERSION=
2026-04-08 00:37:00.862298 | orchestrator | ++ OPENSTACK_VERSION=
2026-04-08 00:37:00.862311 | orchestrator | ++ export ARA=false
2026-04-08 00:37:00.862322 | orchestrator | ++ ARA=false
2026-04-08 00:37:00.862332 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-08 00:37:00.862345 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-08 00:37:00.862355 | orchestrator | ++ export TEMPEST=true
2026-04-08 00:37:00.862367 | orchestrator | ++ TEMPEST=true
2026-04-08 00:37:00.862377 | orchestrator | ++ export IS_ZUUL=true
2026-04-08 00:37:00.862396 | orchestrator | ++ IS_ZUUL=true
2026-04-08 00:37:00.862408 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.114
2026-04-08 00:37:00.862419 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.114
2026-04-08 00:37:00.862430 | orchestrator | ++ export EXTERNAL_API=false
2026-04-08 00:37:00.862963 | orchestrator |
2026-04-08 00:37:00.862981 | orchestrator | # PULL IMAGES
2026-04-08 00:37:00.862990 | orchestrator |
2026-04-08 00:37:00.863000 | orchestrator | ++ EXTERNAL_API=false
2026-04-08 00:37:00.863009 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-08 00:37:00.863020 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-08 00:37:00.863028 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-08 00:37:00.863036 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-08 00:37:00.863045 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-08 00:37:00.863054 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-08 00:37:00.863064 | orchestrator | + echo
2026-04-08 00:37:00.863073 | orchestrator | + echo '# PULL IMAGES'
2026-04-08 00:37:00.863082 | orchestrator | + echo
2026-04-08 00:37:00.863857 | orchestrator | ++ semver 10.0.0 7.0.0
2026-04-08 00:37:00.929427 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-08 00:37:00.929526 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-04-08 00:37:02.186966 | orchestrator | 2026-04-08 00:37:02 | INFO  | Trying to run play pull-images in environment custom
2026-04-08 00:37:12.237144 | orchestrator | 2026-04-08 00:37:12 | INFO  | Prepare task for execution of pull-images.
2026-04-08 00:37:12.311403 | orchestrator | 2026-04-08 00:37:12 | INFO  | Task 01e9c31a-c661-46bd-bdc6-bd819b5fe131 (pull-images) was prepared for execution.
2026-04-08 00:37:12.311537 | orchestrator | 2026-04-08 00:37:12 | INFO  | Task 01e9c31a-c661-46bd-bdc6-bd819b5fe131 is running in background. No more output. Check ARA for logs.
2026-04-08 00:37:13.799921 | orchestrator | 2026-04-08 00:37:13 | INFO  | Trying to run play wipe-partitions in environment custom
2026-04-08 00:37:23.874644 | orchestrator | 2026-04-08 00:37:23 | INFO  | Prepare task for execution of wipe-partitions.
2026-04-08 00:37:23.940824 | orchestrator | 2026-04-08 00:37:23 | INFO  | Task ea25516e-4ed9-45fa-aaa3-2e39db0af5e6 (wipe-partitions) was prepared for execution.
2026-04-08 00:37:23.940917 | orchestrator | 2026-04-08 00:37:23 | INFO  | It takes a moment until task ea25516e-4ed9-45fa-aaa3-2e39db0af5e6 (wipe-partitions) has been started and output is visible here.
2026-04-08 00:37:35.133812 | orchestrator |
2026-04-08 00:37:35.133930 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-04-08 00:37:35.133947 | orchestrator |
2026-04-08 00:37:35.133959 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-04-08 00:37:35.133976 | orchestrator | Wednesday 08 April 2026 00:37:26 +0000 (0:00:00.156) 0:00:00.156 *******
2026-04-08 00:37:35.133990 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:37:35.134111 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:37:35.134127 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:37:35.134138 | orchestrator |
2026-04-08 00:37:35.134150 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-04-08 00:37:35.134197 | orchestrator | Wednesday 08 April 2026 00:37:27 +0000 (0:00:00.929) 0:00:01.085 *******
2026-04-08 00:37:35.134217 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:37:35.134242 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:37:35.134260 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:37:35.134277 | orchestrator |
2026-04-08 00:37:35.134295 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-04-08 00:37:35.134314 | orchestrator | Wednesday 08 April 2026 00:37:28 +0000 (0:00:00.242) 0:00:01.327 *******
2026-04-08 00:37:35.134334 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:37:35.134355 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:37:35.134373 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:37:35.134389 | orchestrator |
2026-04-08 00:37:35.134403 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-04-08 00:37:35.134417 | orchestrator | Wednesday 08 April 2026 00:37:28 +0000 (0:00:00.540) 0:00:01.868 *******
2026-04-08 00:37:35.134429 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:37:35.134442 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:37:35.134454 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:37:35.134467 | orchestrator |
2026-04-08 00:37:35.134479 | orchestrator | TASK [Check device availability] ***********************************************
2026-04-08 00:37:35.134492 | orchestrator | Wednesday 08 April 2026 00:37:28 +0000 (0:00:00.248) 0:00:02.116 *******
2026-04-08 00:37:35.134504 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-08 00:37:35.134522 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-08 00:37:35.134533 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-08 00:37:35.134544 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-08 00:37:35.134555 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-08 00:37:35.134566 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-08 00:37:35.134576 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-08 00:37:35.134587 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-08 00:37:35.134598 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-08 00:37:35.134609 | orchestrator |
2026-04-08 00:37:35.134620 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-04-08 00:37:35.134632 | orchestrator | Wednesday 08 April 2026 00:37:30 +0000 (0:00:01.288) 0:00:03.405 *******
2026-04-08 00:37:35.134643 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-04-08 00:37:35.134654 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-04-08 00:37:35.134664 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-04-08 00:37:35.134675 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-04-08 00:37:35.134685 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-04-08 00:37:35.134696 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-04-08 00:37:35.134707 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-04-08 00:37:35.134718 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-04-08 00:37:35.134728 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-04-08 00:37:35.134739 | orchestrator |
2026-04-08 00:37:35.134750 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-04-08 00:37:35.134761 | orchestrator | Wednesday 08 April 2026 00:37:31 +0000 (0:00:01.294) 0:00:04.699 *******
2026-04-08 00:37:35.134771 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-08 00:37:35.134782 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-08 00:37:35.134793 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-08 00:37:35.134803 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-08 00:37:35.134814 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-08 00:37:35.134843 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-08 00:37:35.134854 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-08 00:37:35.134865 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-08 00:37:35.134876 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-08 00:37:35.134887 | orchestrator |
2026-04-08 00:37:35.134898 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-04-08 00:37:35.134908 | orchestrator | Wednesday 08 April 2026 00:37:33 +0000 (0:00:02.083) 0:00:06.783 *******
2026-04-08 00:37:35.134919 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:37:35.134930 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:37:35.134941 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:37:35.134952 | orchestrator |
2026-04-08 00:37:35.134963 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-04-08 00:37:35.134974 | orchestrator | Wednesday 08 April 2026 00:37:34 +0000 (0:00:00.581) 0:00:07.364 *******
2026-04-08 00:37:35.134985 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:37:35.134996 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:37:35.135006 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:37:35.135017 | orchestrator |
2026-04-08 00:37:35.135029 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:37:35.135041 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:37:35.135053 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:37:35.135084 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:37:35.135096 | orchestrator |
2026-04-08 00:37:35.135107 | orchestrator |
2026-04-08 00:37:35.135118 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:37:35.135129 | orchestrator | Wednesday 08 April 2026 00:37:34 +0000 (0:00:00.798) 0:00:08.162 *******
2026-04-08 00:37:35.135140 | orchestrator | ===============================================================================
2026-04-08 00:37:35.135151 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.08s
2026-04-08 00:37:35.135252 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.29s
2026-04-08 00:37:35.135264 | orchestrator | Check device availability ----------------------------------------------- 1.29s
2026-04-08 00:37:35.135274 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.93s
2026-04-08 00:37:35.135285 | orchestrator | Request device events from the kernel ----------------------------------- 0.80s
2026-04-08 00:37:35.135296 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s
2026-04-08 00:37:35.135306 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.54s
2026-04-08 00:37:35.135317 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s
2026-04-08 00:37:35.135328 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s
2026-04-08 00:37:46.689776 | orchestrator | 2026-04-08 00:37:46 | INFO  | Prepare task for execution of facts.
2026-04-08 00:37:46.758672 | orchestrator | 2026-04-08 00:37:46 | INFO  | Task 4f55d40f-f09c-4c4f-bb10-807ac49b5239 (facts) was prepared for execution.
2026-04-08 00:37:46.758769 | orchestrator | 2026-04-08 00:37:46 | INFO  | It takes a moment until task 4f55d40f-f09c-4c4f-bb10-807ac49b5239 (facts) has been started and output is visible here.
2026-04-08 00:37:57.330362 | orchestrator |
2026-04-08 00:37:57.330482 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-08 00:37:57.330505 | orchestrator |
2026-04-08 00:37:57.330526 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-08 00:37:57.330578 | orchestrator | Wednesday 08 April 2026 00:37:49 +0000 (0:00:00.303) 0:00:00.303 *******
2026-04-08 00:37:57.330600 | orchestrator | ok: [testbed-manager]
2026-04-08 00:37:57.330622 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:37:57.330641 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:37:57.330655 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:37:57.330666 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:37:57.330677 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:37:57.330688 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:37:57.330699 | orchestrator |
2026-04-08 00:37:57.330716 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-08 00:37:57.330754 | orchestrator | Wednesday 08 April 2026 00:37:50 +0000 (0:00:01.174) 0:00:01.477 *******
2026-04-08 00:37:57.330810 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:37:57.330826 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:37:57.330836 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:37:57.330851 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:37:57.330869 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:37:57.330888 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:37:57.330907 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:37:57.330925 | orchestrator |
2026-04-08 00:37:57.330942 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-08 00:37:57.330954 | orchestrator |
2026-04-08 00:37:57.330968 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-08 00:37:57.330980 | orchestrator | Wednesday 08 April 2026 00:37:52 +0000 (0:00:01.130) 0:00:02.608 *******
2026-04-08 00:37:57.330992 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:37:57.331003 | orchestrator | ok: [testbed-manager]
2026-04-08 00:37:57.331014 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:37:57.331025 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:37:57.331036 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:37:57.331046 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:37:57.331057 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:37:57.331068 | orchestrator |
2026-04-08 00:37:57.331079 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-08 00:37:57.331089 | orchestrator |
2026-04-08 00:37:57.331100 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-08 00:37:57.331111 | orchestrator | Wednesday 08 April 2026 00:37:56 +0000 (0:00:04.478) 0:00:07.086 *******
2026-04-08 00:37:57.331122 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:37:57.331163 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:37:57.331177 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:37:57.331188 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:37:57.331199 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:37:57.331209 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:37:57.331226 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:37:57.331243 | orchestrator |
2026-04-08 00:37:57.331262 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:37:57.331281 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:37:57.331301 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:37:57.331320 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:37:57.331331 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:37:57.331342 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:37:57.331353 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:37:57.331377 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:37:57.331388 | orchestrator |
2026-04-08 00:37:57.331399 | orchestrator |
2026-04-08 00:37:57.331410 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:37:57.331420 | orchestrator | Wednesday 08 April 2026 00:37:57 +0000 (0:00:00.463) 0:00:07.550 *******
2026-04-08 00:37:57.331431 | orchestrator | ===============================================================================
2026-04-08 00:37:57.331442 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.48s
2026-04-08 00:37:57.331453 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.17s
2026-04-08 00:37:57.331463 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.13s
2026-04-08 00:37:57.331474 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s
2026-04-08 00:37:58.734767 | orchestrator | 2026-04-08 00:37:58 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-04-08 00:37:58.797843 | orchestrator | 2026-04-08 00:37:58 | INFO  | Task 3845b1b9-53fc-403b-b171-477e735097c5 (ceph-configure-lvm-volumes) was prepared for execution.
2026-04-08 00:37:58.797948 | orchestrator | 2026-04-08 00:37:58 | INFO  | It takes a moment until task 3845b1b9-53fc-403b-b171-477e735097c5 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-04-08 00:38:09.538396 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-08 00:38:09.538507 | orchestrator | 2.16.14
2026-04-08 00:38:09.538525 | orchestrator |
2026-04-08 00:38:09.538538 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-08 00:38:09.538551 | orchestrator |
2026-04-08 00:38:09.538562 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-08 00:38:09.538584 | orchestrator | Wednesday 08 April 2026 00:38:03 +0000 (0:00:00.280) 0:00:00.280 *******
2026-04-08 00:38:09.538596 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-08 00:38:09.538608 | orchestrator |
2026-04-08 00:38:09.538620 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-08 00:38:09.538631 | orchestrator | Wednesday 08 April 2026 00:38:03 +0000 (0:00:00.224) 0:00:00.505 *******
2026-04-08 00:38:09.538642 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:38:09.538654 | orchestrator |
2026-04-08 00:38:09.538665 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:38:09.538677 | orchestrator | Wednesday 08 April 2026 00:38:03 +0000 (0:00:00.209) 0:00:00.714 *******
2026-04-08 00:38:09.538688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-08 00:38:09.538699 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-08 00:38:09.538710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-08 00:38:09.538721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-08 00:38:09.538732 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-08 00:38:09.538743 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-08 00:38:09.538754 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-08 00:38:09.538765 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-08 00:38:09.538776 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-08 00:38:09.538787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-08 00:38:09.538798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-08 00:38:09.538832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-08 00:38:09.538849 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-08 00:38:09.538868 | orchestrator |
2026-04-08 00:38:09.538886 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:38:09.538904 | orchestrator | Wednesday 08 April 2026 00:38:03 +0000 (0:00:00.353) 0:00:01.067 *******
2026-04-08 00:38:09.538927 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:09.538946 | orchestrator |
2026-04-08 00:38:09.538965 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:38:09.538978 | orchestrator | Wednesday 08 April 2026 00:38:04 +0000 (0:00:00.461) 0:00:01.529 *******
2026-04-08 00:38:09.538990 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:09.539003 | orchestrator |
2026-04-08 00:38:09.539016 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:38:09.539029 | orchestrator | Wednesday 08 April 2026 00:38:04 +0000 (0:00:00.177) 0:00:01.707 *******
2026-04-08 00:38:09.539046 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:09.539059 | orchestrator |
2026-04-08 00:38:09.539071 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:38:09.539083 | orchestrator | Wednesday 08 April 2026 00:38:04 +0000 (0:00:00.177) 0:00:01.885 *******
2026-04-08 00:38:09.539096 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:09.539108 | orchestrator |
2026-04-08 00:38:09.539152 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:38:09.539166 | orchestrator | Wednesday 08 April 2026 00:38:04 +0000 (0:00:00.180) 0:00:02.065 *******
2026-04-08 00:38:09.539179 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:09.539192 | orchestrator |
2026-04-08 00:38:09.539204 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:38:09.539217 | orchestrator | Wednesday 08 April 2026 00:38:05 +0000 (0:00:00.169) 0:00:02.235 *******
2026-04-08 00:38:09.539230 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:09.539242 | orchestrator |
2026-04-08 00:38:09.539255 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:38:09.539268 | orchestrator | Wednesday 08 April 2026 00:38:05 +0000 (0:00:00.183) 0:00:02.418 *******
2026-04-08 00:38:09.539283 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:09.539302 | orchestrator |
2026-04-08 00:38:09.539319 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:38:09.539337 | orchestrator | Wednesday 08 April 2026 00:38:05 +0000 (0:00:00.187) 0:00:02.606 *******
2026-04-08 00:38:09.539357 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:09.539375 | orchestrator |
2026-04-08 00:38:09.539394 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:38:09.539406 | orchestrator | Wednesday 08 April 2026 00:38:05 +0000 (0:00:00.188) 0:00:02.795 *******
2026-04-08 00:38:09.539417 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92)
2026-04-08 00:38:09.539429 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92)
2026-04-08 00:38:09.539440 | orchestrator |
2026-04-08 00:38:09.539450 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:38:09.539482 | orchestrator | Wednesday 08 April 2026 00:38:06 +0000 (0:00:00.400) 0:00:03.195 *******
2026-04-08 00:38:09.539493 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9851de66-42ac-4afe-9f6b-65921d8ebe77)
2026-04-08 00:38:09.539504 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9851de66-42ac-4afe-9f6b-65921d8ebe77)
2026-04-08 00:38:09.539515 | orchestrator |
2026-04-08 00:38:09.539526 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:38:09.539537 | orchestrator | Wednesday 08 April 2026 00:38:06 +0000 (0:00:00.399) 0:00:03.595 *******
2026-04-08 00:38:09.539558 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3c9bb5e0-782f-4e13-9d09-525e18a95d4a)
2026-04-08 00:38:09.539569 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3c9bb5e0-782f-4e13-9d09-525e18a95d4a)
2026-04-08 00:38:09.539580 | orchestrator |
2026-04-08 00:38:09.539591 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:38:09.539601 | orchestrator | Wednesday 08 April 2026 00:38:06 +0000 (0:00:00.544) 0:00:04.140 *******
2026-04-08 00:38:09.539612 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_10ca2d35-3b66-46f3-ab0f-253d8a66f2e6)
2026-04-08 00:38:09.539623 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_10ca2d35-3b66-46f3-ab0f-253d8a66f2e6)
2026-04-08 00:38:09.539634 | orchestrator |
2026-04-08 00:38:09.539645 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:38:09.539655 | orchestrator | Wednesday 08 April 2026 00:38:07 +0000 (0:00:00.505) 0:00:04.645 *******
2026-04-08 00:38:09.539666 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-08 00:38:09.539677 | orchestrator |
2026-04-08 00:38:09.539687 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:38:09.539698 | orchestrator | Wednesday 08 April 2026 00:38:08 +0000 (0:00:00.517) 0:00:05.162 *******
2026-04-08 00:38:09.539709 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-08 00:38:09.539726 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-08 00:38:09.539738 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-08 00:38:09.539748 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-08 00:38:09.539759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-08 00:38:09.539770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-08 00:38:09.539780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-08 00:38:09.539791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-08 00:38:09.539802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-08 00:38:09.539813 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-08 00:38:09.539824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-08 00:38:09.539835 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-08 00:38:09.539845 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-08 00:38:09.539856 | orchestrator |
2026-04-08 00:38:09.539867 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:38:09.539878 | orchestrator | Wednesday 08 April 2026 00:38:08 +0000 (0:00:00.333) 0:00:05.496 *******
2026-04-08 00:38:09.539889 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:09.539899 | orchestrator |
2026-04-08 00:38:09.539910 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:38:09.539921 | orchestrator | Wednesday 08 April 2026 00:38:08 +0000 (0:00:00.170) 0:00:05.666 *******
2026-04-08 00:38:09.539932 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:09.539942 | orchestrator |
2026-04-08 00:38:09.539953 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:38:09.539964 | orchestrator | Wednesday 08 April 2026 00:38:08 +0000 (0:00:00.162) 0:00:05.829 *******
2026-04-08 00:38:09.539975 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:09.539986 | orchestrator |
2026-04-08 00:38:09.539997 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:38:09.540014 | orchestrator | Wednesday 08 April 2026 00:38:08 +0000 (0:00:00.165) 0:00:05.994 *******
2026-04-08 00:38:09.540025 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:09.540036 | orchestrator |
2026-04-08 00:38:09.540047 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:38:09.540058 | orchestrator | Wednesday 08 April 2026 00:38:09 +0000 (0:00:00.168) 0:00:06.163 *******
2026-04-08 00:38:09.540068 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:09.540079 | orchestrator |
2026-04-08 00:38:09.540090 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:38:09.540106 | orchestrator | Wednesday 08 April 2026 00:38:09 +0000 (0:00:00.176) 0:00:06.339 *******
2026-04-08 00:38:09.540117 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:09.540153 | orchestrator |
2026-04-08 00:38:09.540164 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:38:09.540175 | orchestrator | Wednesday 08 April 2026 00:38:09 +0000 (0:00:00.180) 0:00:06.519 *******
2026-04-08 00:38:09.540185 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:09.540196 | orchestrator |
2026-04-08 00:38:09.540213 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:38:15.668867 | orchestrator | Wednesday 08 April 2026 00:38:09 +0000 (0:00:00.158) 0:00:06.677 *******
2026-04-08 00:38:15.668985 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:15.669001 | orchestrator |
2026-04-08 00:38:15.669014 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:38:15.669026 | orchestrator | Wednesday 08 April 2026 00:38:09 +0000 (0:00:00.164) 0:00:06.842 *******
2026-04-08 00:38:15.669037 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-08 00:38:15.669049 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-08 00:38:15.669061 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-08 00:38:15.669071 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-08 00:38:15.669082 | orchestrator |
2026-04-08 00:38:15.669094 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:38:15.669105 | orchestrator | Wednesday 08 April 2026 00:38:10 +0000 (0:00:00.775) 0:00:07.617 *******
2026-04-08 00:38:15.669156 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:15.669178 | orchestrator |
2026-04-08 00:38:15.669198 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:38:15.669217 | orchestrator | Wednesday 08 April 2026 00:38:10 +0000 (0:00:00.164) 0:00:07.782 *******
2026-04-08 00:38:15.669238 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:15.669256 | orchestrator |
2026-04-08 00:38:15.669267 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:38:15.669279 | orchestrator | Wednesday 08 April 2026 00:38:10 +0000 (0:00:00.170) 0:00:07.953 *******
2026-04-08 00:38:15.669289 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:15.669300 | orchestrator |
2026-04-08 00:38:15.669311 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:38:15.669322 | orchestrator | Wednesday 08 April 2026 00:38:10 +0000 (0:00:00.164) 0:00:08.117 *******
2026-04-08 00:38:15.669333 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:38:15.669344 | orchestrator |
2026-04-08 00:38:15.669354 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-08 00:38:15.669366 | orchestrator | Wednesday 08 April 2026 00:38:11 +0000 (0:00:00.169) 0:00:08.286 *******
2026-04-08 00:38:15.669379 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-04-08 00:38:15.669392 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-04-08 00:38:15.669405 | orchestrator |
2026-04-08 00:38:15.669417 | orchestrator | TASK [Generate WAL VG names]
*************************************************** 2026-04-08 00:38:15.669430 | orchestrator | Wednesday 08 April 2026 00:38:11 +0000 (0:00:00.154) 0:00:08.440 ******* 2026-04-08 00:38:15.669443 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:38:15.669481 | orchestrator | 2026-04-08 00:38:15.669498 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-08 00:38:15.669516 | orchestrator | Wednesday 08 April 2026 00:38:11 +0000 (0:00:00.124) 0:00:08.565 ******* 2026-04-08 00:38:15.669535 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:38:15.669554 | orchestrator | 2026-04-08 00:38:15.669573 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-08 00:38:15.669598 | orchestrator | Wednesday 08 April 2026 00:38:11 +0000 (0:00:00.117) 0:00:08.682 ******* 2026-04-08 00:38:15.669617 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:38:15.669634 | orchestrator | 2026-04-08 00:38:15.669652 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-08 00:38:15.669672 | orchestrator | Wednesday 08 April 2026 00:38:11 +0000 (0:00:00.092) 0:00:08.775 ******* 2026-04-08 00:38:15.669685 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:38:15.669696 | orchestrator | 2026-04-08 00:38:15.669707 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-08 00:38:15.669718 | orchestrator | Wednesday 08 April 2026 00:38:11 +0000 (0:00:00.118) 0:00:08.893 ******* 2026-04-08 00:38:15.669735 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'}}) 2026-04-08 00:38:15.669754 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9c748ac0-b7ad-5284-8a6e-a168bddd5b66'}}) 2026-04-08 00:38:15.669774 | orchestrator | 2026-04-08 00:38:15.669793 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-04-08 00:38:15.669813 | orchestrator | Wednesday 08 April 2026 00:38:11 +0000 (0:00:00.125) 0:00:09.018 ******* 2026-04-08 00:38:15.669833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'}})  2026-04-08 00:38:15.669865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9c748ac0-b7ad-5284-8a6e-a168bddd5b66'}})  2026-04-08 00:38:15.669877 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:38:15.669888 | orchestrator | 2026-04-08 00:38:15.669899 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-08 00:38:15.669910 | orchestrator | Wednesday 08 April 2026 00:38:11 +0000 (0:00:00.128) 0:00:09.147 ******* 2026-04-08 00:38:15.669921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'}})  2026-04-08 00:38:15.669932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9c748ac0-b7ad-5284-8a6e-a168bddd5b66'}})  2026-04-08 00:38:15.669942 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:38:15.669953 | orchestrator | 2026-04-08 00:38:15.669963 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-08 00:38:15.669974 | orchestrator | Wednesday 08 April 2026 00:38:12 +0000 (0:00:00.121) 0:00:09.269 ******* 2026-04-08 00:38:15.669985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'}})  2026-04-08 00:38:15.670016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9c748ac0-b7ad-5284-8a6e-a168bddd5b66'}})  2026-04-08 00:38:15.670238 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:38:15.670249 | 
orchestrator | 2026-04-08 00:38:15.670260 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-08 00:38:15.670271 | orchestrator | Wednesday 08 April 2026 00:38:12 +0000 (0:00:00.242) 0:00:09.511 ******* 2026-04-08 00:38:15.670282 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:38:15.670292 | orchestrator | 2026-04-08 00:38:15.670303 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-08 00:38:15.670314 | orchestrator | Wednesday 08 April 2026 00:38:12 +0000 (0:00:00.124) 0:00:09.635 ******* 2026-04-08 00:38:15.670324 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:38:15.670335 | orchestrator | 2026-04-08 00:38:15.670361 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-08 00:38:15.670372 | orchestrator | Wednesday 08 April 2026 00:38:12 +0000 (0:00:00.119) 0:00:09.754 ******* 2026-04-08 00:38:15.670383 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:38:15.670394 | orchestrator | 2026-04-08 00:38:15.670405 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-08 00:38:15.670427 | orchestrator | Wednesday 08 April 2026 00:38:12 +0000 (0:00:00.105) 0:00:09.860 ******* 2026-04-08 00:38:15.670439 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:38:15.670450 | orchestrator | 2026-04-08 00:38:15.670460 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-08 00:38:15.670472 | orchestrator | Wednesday 08 April 2026 00:38:12 +0000 (0:00:00.103) 0:00:09.964 ******* 2026-04-08 00:38:15.670491 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:38:15.670508 | orchestrator | 2026-04-08 00:38:15.670526 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-08 00:38:15.670545 | orchestrator | Wednesday 08 April 2026 00:38:12 +0000 
(0:00:00.114) 0:00:10.078 ******* 2026-04-08 00:38:15.670563 | orchestrator | ok: [testbed-node-3] => { 2026-04-08 00:38:15.670582 | orchestrator |  "ceph_osd_devices": { 2026-04-08 00:38:15.670600 | orchestrator |  "sdb": { 2026-04-08 00:38:15.670613 | orchestrator |  "osd_lvm_uuid": "19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8" 2026-04-08 00:38:15.670624 | orchestrator |  }, 2026-04-08 00:38:15.670635 | orchestrator |  "sdc": { 2026-04-08 00:38:15.670646 | orchestrator |  "osd_lvm_uuid": "9c748ac0-b7ad-5284-8a6e-a168bddd5b66" 2026-04-08 00:38:15.670656 | orchestrator |  } 2026-04-08 00:38:15.670667 | orchestrator |  } 2026-04-08 00:38:15.670678 | orchestrator | } 2026-04-08 00:38:15.670689 | orchestrator | 2026-04-08 00:38:15.670700 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-08 00:38:15.670711 | orchestrator | Wednesday 08 April 2026 00:38:13 +0000 (0:00:00.098) 0:00:10.177 ******* 2026-04-08 00:38:15.670721 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:38:15.670732 | orchestrator | 2026-04-08 00:38:15.670743 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-08 00:38:15.670754 | orchestrator | Wednesday 08 April 2026 00:38:13 +0000 (0:00:00.107) 0:00:10.285 ******* 2026-04-08 00:38:15.670765 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:38:15.670775 | orchestrator | 2026-04-08 00:38:15.670786 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-08 00:38:15.670797 | orchestrator | Wednesday 08 April 2026 00:38:13 +0000 (0:00:00.101) 0:00:10.386 ******* 2026-04-08 00:38:15.670807 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:38:15.670818 | orchestrator | 2026-04-08 00:38:15.670829 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-08 00:38:15.670839 | orchestrator | Wednesday 08 April 2026 00:38:13 +0000 
(0:00:00.111) 0:00:10.498 ******* 2026-04-08 00:38:15.670850 | orchestrator | changed: [testbed-node-3] => { 2026-04-08 00:38:15.670861 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-08 00:38:15.670880 | orchestrator |  "ceph_osd_devices": { 2026-04-08 00:38:15.670899 | orchestrator |  "sdb": { 2026-04-08 00:38:15.670918 | orchestrator |  "osd_lvm_uuid": "19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8" 2026-04-08 00:38:15.670938 | orchestrator |  }, 2026-04-08 00:38:15.670958 | orchestrator |  "sdc": { 2026-04-08 00:38:15.670976 | orchestrator |  "osd_lvm_uuid": "9c748ac0-b7ad-5284-8a6e-a168bddd5b66" 2026-04-08 00:38:15.670995 | orchestrator |  } 2026-04-08 00:38:15.671008 | orchestrator |  }, 2026-04-08 00:38:15.671018 | orchestrator |  "lvm_volumes": [ 2026-04-08 00:38:15.671029 | orchestrator |  { 2026-04-08 00:38:15.671040 | orchestrator |  "data": "osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8", 2026-04-08 00:38:15.671051 | orchestrator |  "data_vg": "ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8" 2026-04-08 00:38:15.671062 | orchestrator |  }, 2026-04-08 00:38:15.671084 | orchestrator |  { 2026-04-08 00:38:15.671095 | orchestrator |  "data": "osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66", 2026-04-08 00:38:15.671106 | orchestrator |  "data_vg": "ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66" 2026-04-08 00:38:15.671162 | orchestrator |  } 2026-04-08 00:38:15.671174 | orchestrator |  ] 2026-04-08 00:38:15.671185 | orchestrator |  } 2026-04-08 00:38:15.671196 | orchestrator | } 2026-04-08 00:38:15.671207 | orchestrator | 2026-04-08 00:38:15.671218 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-08 00:38:15.671228 | orchestrator | Wednesday 08 April 2026 00:38:13 +0000 (0:00:00.167) 0:00:10.665 ******* 2026-04-08 00:38:15.671239 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-08 00:38:15.671250 | orchestrator | 2026-04-08 00:38:15.671261 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-04-08 00:38:15.671272 | orchestrator | 2026-04-08 00:38:15.671282 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-08 00:38:15.671293 | orchestrator | Wednesday 08 April 2026 00:38:15 +0000 (0:00:01.740) 0:00:12.406 ******* 2026-04-08 00:38:15.671304 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-08 00:38:15.671315 | orchestrator | 2026-04-08 00:38:15.671326 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-08 00:38:15.671347 | orchestrator | Wednesday 08 April 2026 00:38:15 +0000 (0:00:00.215) 0:00:12.621 ******* 2026-04-08 00:38:15.671367 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:38:15.671387 | orchestrator | 2026-04-08 00:38:15.671421 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:21.816471 | orchestrator | Wednesday 08 April 2026 00:38:15 +0000 (0:00:00.190) 0:00:12.811 ******* 2026-04-08 00:38:21.816580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-08 00:38:21.816597 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-08 00:38:21.816609 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-08 00:38:21.816620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-08 00:38:21.816631 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-08 00:38:21.816642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-08 00:38:21.816653 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-08 00:38:21.816664 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-08 00:38:21.816680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-08 00:38:21.816692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-08 00:38:21.816703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-08 00:38:21.816714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-08 00:38:21.816725 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-08 00:38:21.816737 | orchestrator | 2026-04-08 00:38:21.816749 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:21.816760 | orchestrator | Wednesday 08 April 2026 00:38:15 +0000 (0:00:00.309) 0:00:13.121 ******* 2026-04-08 00:38:21.816771 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:21.816783 | orchestrator | 2026-04-08 00:38:21.816794 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:21.816806 | orchestrator | Wednesday 08 April 2026 00:38:16 +0000 (0:00:00.167) 0:00:13.288 ******* 2026-04-08 00:38:21.816817 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:21.816850 | orchestrator | 2026-04-08 00:38:21.816862 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:21.816873 | orchestrator | Wednesday 08 April 2026 00:38:16 +0000 (0:00:00.163) 0:00:13.452 ******* 2026-04-08 00:38:21.816884 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:21.816895 | orchestrator | 2026-04-08 00:38:21.816906 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:21.816917 | 
orchestrator | Wednesday 08 April 2026 00:38:16 +0000 (0:00:00.158) 0:00:13.611 ******* 2026-04-08 00:38:21.816928 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:21.816939 | orchestrator | 2026-04-08 00:38:21.816949 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:21.816960 | orchestrator | Wednesday 08 April 2026 00:38:16 +0000 (0:00:00.155) 0:00:13.766 ******* 2026-04-08 00:38:21.816971 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:21.816982 | orchestrator | 2026-04-08 00:38:21.816993 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:21.817005 | orchestrator | Wednesday 08 April 2026 00:38:16 +0000 (0:00:00.156) 0:00:13.922 ******* 2026-04-08 00:38:21.817018 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:21.817031 | orchestrator | 2026-04-08 00:38:21.817044 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:21.817056 | orchestrator | Wednesday 08 April 2026 00:38:17 +0000 (0:00:00.383) 0:00:14.306 ******* 2026-04-08 00:38:21.817069 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:21.817082 | orchestrator | 2026-04-08 00:38:21.817094 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:21.817133 | orchestrator | Wednesday 08 April 2026 00:38:17 +0000 (0:00:00.169) 0:00:14.475 ******* 2026-04-08 00:38:21.817145 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:21.817158 | orchestrator | 2026-04-08 00:38:21.817170 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:21.817183 | orchestrator | Wednesday 08 April 2026 00:38:17 +0000 (0:00:00.163) 0:00:14.638 ******* 2026-04-08 00:38:21.817196 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa) 2026-04-08 00:38:21.817211 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa) 2026-04-08 00:38:21.817223 | orchestrator | 2026-04-08 00:38:21.817236 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:21.817266 | orchestrator | Wednesday 08 April 2026 00:38:17 +0000 (0:00:00.352) 0:00:14.991 ******* 2026-04-08 00:38:21.817279 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_49047f2d-69c1-4fac-a475-f46440c51814) 2026-04-08 00:38:21.817292 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_49047f2d-69c1-4fac-a475-f46440c51814) 2026-04-08 00:38:21.817305 | orchestrator | 2026-04-08 00:38:21.817317 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:21.817329 | orchestrator | Wednesday 08 April 2026 00:38:18 +0000 (0:00:00.360) 0:00:15.352 ******* 2026-04-08 00:38:21.817343 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0a17ff12-522e-4235-8e4c-edb4898b90f5) 2026-04-08 00:38:21.817356 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0a17ff12-522e-4235-8e4c-edb4898b90f5) 2026-04-08 00:38:21.817367 | orchestrator | 2026-04-08 00:38:21.817379 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:21.817409 | orchestrator | Wednesday 08 April 2026 00:38:18 +0000 (0:00:00.378) 0:00:15.731 ******* 2026-04-08 00:38:21.817422 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_22a44c82-a679-4e37-857c-f96ffb845a8a) 2026-04-08 00:38:21.817433 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_22a44c82-a679-4e37-857c-f96ffb845a8a) 2026-04-08 00:38:21.817444 | orchestrator | 2026-04-08 00:38:21.817455 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-08 00:38:21.817475 | orchestrator | Wednesday 08 April 2026 00:38:18 +0000 (0:00:00.362) 0:00:16.093 ******* 2026-04-08 00:38:21.817486 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-08 00:38:21.817496 | orchestrator | 2026-04-08 00:38:21.817507 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:21.817518 | orchestrator | Wednesday 08 April 2026 00:38:19 +0000 (0:00:00.264) 0:00:16.357 ******* 2026-04-08 00:38:21.817529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-08 00:38:21.817540 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-08 00:38:21.817550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-08 00:38:21.817561 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-08 00:38:21.817572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-08 00:38:21.817583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-08 00:38:21.817593 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-08 00:38:21.817604 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-08 00:38:21.817615 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-08 00:38:21.817625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-08 00:38:21.817636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-04-08 00:38:21.817647 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-08 00:38:21.817657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-08 00:38:21.817668 | orchestrator | 2026-04-08 00:38:21.817679 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:21.817690 | orchestrator | Wednesday 08 April 2026 00:38:19 +0000 (0:00:00.333) 0:00:16.690 ******* 2026-04-08 00:38:21.817701 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:21.817711 | orchestrator | 2026-04-08 00:38:21.817722 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:21.817733 | orchestrator | Wednesday 08 April 2026 00:38:19 +0000 (0:00:00.173) 0:00:16.864 ******* 2026-04-08 00:38:21.817744 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:21.817755 | orchestrator | 2026-04-08 00:38:21.817766 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:21.817777 | orchestrator | Wednesday 08 April 2026 00:38:20 +0000 (0:00:00.421) 0:00:17.285 ******* 2026-04-08 00:38:21.817787 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:21.817798 | orchestrator | 2026-04-08 00:38:21.817809 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:21.817820 | orchestrator | Wednesday 08 April 2026 00:38:20 +0000 (0:00:00.172) 0:00:17.458 ******* 2026-04-08 00:38:21.817831 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:21.817841 | orchestrator | 2026-04-08 00:38:21.817852 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:21.817863 | orchestrator | Wednesday 08 April 2026 00:38:20 +0000 (0:00:00.184) 0:00:17.642 ******* 2026-04-08 00:38:21.817874 
| orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:21.817884 | orchestrator | 2026-04-08 00:38:21.817895 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:21.817906 | orchestrator | Wednesday 08 April 2026 00:38:20 +0000 (0:00:00.175) 0:00:17.818 ******* 2026-04-08 00:38:21.817917 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:21.817928 | orchestrator | 2026-04-08 00:38:21.817939 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:21.817964 | orchestrator | Wednesday 08 April 2026 00:38:20 +0000 (0:00:00.170) 0:00:17.989 ******* 2026-04-08 00:38:21.817975 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:21.817986 | orchestrator | 2026-04-08 00:38:21.817997 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:21.818008 | orchestrator | Wednesday 08 April 2026 00:38:21 +0000 (0:00:00.163) 0:00:18.152 ******* 2026-04-08 00:38:21.818075 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:21.818089 | orchestrator | 2026-04-08 00:38:21.818101 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:21.818144 | orchestrator | Wednesday 08 April 2026 00:38:21 +0000 (0:00:00.172) 0:00:18.325 ******* 2026-04-08 00:38:21.818156 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-08 00:38:21.818168 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-08 00:38:21.818179 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-08 00:38:21.818190 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-08 00:38:21.818201 | orchestrator | 2026-04-08 00:38:21.818212 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:21.818223 | orchestrator | Wednesday 08 April 2026 00:38:21 +0000 (0:00:00.542) 
0:00:18.867 ******* 2026-04-08 00:38:21.818234 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:26.942931 | orchestrator | 2026-04-08 00:38:26.943022 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:26.943034 | orchestrator | Wednesday 08 April 2026 00:38:21 +0000 (0:00:00.173) 0:00:19.041 ******* 2026-04-08 00:38:26.943042 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:26.943050 | orchestrator | 2026-04-08 00:38:26.943059 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:26.943068 | orchestrator | Wednesday 08 April 2026 00:38:22 +0000 (0:00:00.154) 0:00:19.196 ******* 2026-04-08 00:38:26.943075 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:26.943081 | orchestrator | 2026-04-08 00:38:26.943088 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:26.943094 | orchestrator | Wednesday 08 April 2026 00:38:22 +0000 (0:00:00.170) 0:00:19.366 ******* 2026-04-08 00:38:26.943167 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:26.943174 | orchestrator | 2026-04-08 00:38:26.943181 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-08 00:38:26.943187 | orchestrator | Wednesday 08 April 2026 00:38:22 +0000 (0:00:00.162) 0:00:19.529 ******* 2026-04-08 00:38:26.943194 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-04-08 00:38:26.943203 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-04-08 00:38:26.943212 | orchestrator | 2026-04-08 00:38:26.943220 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-08 00:38:26.943227 | orchestrator | Wednesday 08 April 2026 00:38:22 +0000 (0:00:00.269) 0:00:19.798 ******* 2026-04-08 00:38:26.943233 | orchestrator | skipping: 
[testbed-node-4] 2026-04-08 00:38:26.943239 | orchestrator | 2026-04-08 00:38:26.943246 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-08 00:38:26.943252 | orchestrator | Wednesday 08 April 2026 00:38:22 +0000 (0:00:00.104) 0:00:19.902 ******* 2026-04-08 00:38:26.943259 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:26.943265 | orchestrator | 2026-04-08 00:38:26.943272 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-08 00:38:26.943278 | orchestrator | Wednesday 08 April 2026 00:38:22 +0000 (0:00:00.105) 0:00:20.008 ******* 2026-04-08 00:38:26.943285 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:26.943291 | orchestrator | 2026-04-08 00:38:26.943298 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-08 00:38:26.943308 | orchestrator | Wednesday 08 April 2026 00:38:22 +0000 (0:00:00.110) 0:00:20.119 ******* 2026-04-08 00:38:26.943316 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:38:26.943346 | orchestrator | 2026-04-08 00:38:26.943353 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-08 00:38:26.943360 | orchestrator | Wednesday 08 April 2026 00:38:23 +0000 (0:00:00.125) 0:00:20.245 ******* 2026-04-08 00:38:26.943367 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5eee886-e951-5b32-a4a0-4842fe7aed13'}}) 2026-04-08 00:38:26.943373 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'}}) 2026-04-08 00:38:26.943380 | orchestrator | 2026-04-08 00:38:26.943387 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-08 00:38:26.943395 | orchestrator | Wednesday 08 April 2026 00:38:23 +0000 (0:00:00.137) 0:00:20.383 ******* 2026-04-08 00:38:26.943406 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5eee886-e951-5b32-a4a0-4842fe7aed13'}})  2026-04-08 00:38:26.943414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'}})  2026-04-08 00:38:26.943420 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:26.943427 | orchestrator | 2026-04-08 00:38:26.943433 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-08 00:38:26.943440 | orchestrator | Wednesday 08 April 2026 00:38:23 +0000 (0:00:00.127) 0:00:20.511 ******* 2026-04-08 00:38:26.943446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5eee886-e951-5b32-a4a0-4842fe7aed13'}})  2026-04-08 00:38:26.943453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'}})  2026-04-08 00:38:26.943459 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:26.943466 | orchestrator | 2026-04-08 00:38:26.943473 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-08 00:38:26.943481 | orchestrator | Wednesday 08 April 2026 00:38:23 +0000 (0:00:00.114) 0:00:20.625 ******* 2026-04-08 00:38:26.943491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5eee886-e951-5b32-a4a0-4842fe7aed13'}})  2026-04-08 00:38:26.943501 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'}})  2026-04-08 00:38:26.943511 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:26.943520 | orchestrator | 2026-04-08 00:38:26.943529 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-08 00:38:26.943554 | orchestrator | Wednesday 08 April 2026 00:38:23 +0000 
(0:00:00.121) 0:00:20.746 ******* 2026-04-08 00:38:26.943563 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:38:26.943573 | orchestrator | 2026-04-08 00:38:26.943582 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-08 00:38:26.943592 | orchestrator | Wednesday 08 April 2026 00:38:23 +0000 (0:00:00.122) 0:00:20.869 ******* 2026-04-08 00:38:26.943600 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:38:26.943609 | orchestrator | 2026-04-08 00:38:26.943619 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-08 00:38:26.943629 | orchestrator | Wednesday 08 April 2026 00:38:23 +0000 (0:00:00.110) 0:00:20.979 ******* 2026-04-08 00:38:26.943657 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:26.943664 | orchestrator | 2026-04-08 00:38:26.943670 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-08 00:38:26.943677 | orchestrator | Wednesday 08 April 2026 00:38:23 +0000 (0:00:00.113) 0:00:21.092 ******* 2026-04-08 00:38:26.943683 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:26.943690 | orchestrator | 2026-04-08 00:38:26.943697 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-08 00:38:26.943703 | orchestrator | Wednesday 08 April 2026 00:38:24 +0000 (0:00:00.230) 0:00:21.323 ******* 2026-04-08 00:38:26.943710 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:26.943717 | orchestrator | 2026-04-08 00:38:26.943723 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-08 00:38:26.943738 | orchestrator | Wednesday 08 April 2026 00:38:24 +0000 (0:00:00.119) 0:00:21.442 ******* 2026-04-08 00:38:26.943744 | orchestrator | ok: [testbed-node-4] => { 2026-04-08 00:38:26.943751 | orchestrator |  "ceph_osd_devices": { 2026-04-08 00:38:26.943757 | orchestrator |  "sdb": { 
2026-04-08 00:38:26.943767 | orchestrator |  "osd_lvm_uuid": "c5eee886-e951-5b32-a4a0-4842fe7aed13" 2026-04-08 00:38:26.943774 | orchestrator |  }, 2026-04-08 00:38:26.943781 | orchestrator |  "sdc": { 2026-04-08 00:38:26.943787 | orchestrator |  "osd_lvm_uuid": "16b9c52d-170e-5f8d-b9c1-c30752bb4b9e" 2026-04-08 00:38:26.943794 | orchestrator |  } 2026-04-08 00:38:26.943800 | orchestrator |  } 2026-04-08 00:38:26.943807 | orchestrator | } 2026-04-08 00:38:26.943814 | orchestrator | 2026-04-08 00:38:26.943821 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-08 00:38:26.943826 | orchestrator | Wednesday 08 April 2026 00:38:24 +0000 (0:00:00.120) 0:00:21.563 ******* 2026-04-08 00:38:26.943832 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:26.943838 | orchestrator | 2026-04-08 00:38:26.943843 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-08 00:38:26.943850 | orchestrator | Wednesday 08 April 2026 00:38:24 +0000 (0:00:00.107) 0:00:21.670 ******* 2026-04-08 00:38:26.943856 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:26.943863 | orchestrator | 2026-04-08 00:38:26.943869 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-08 00:38:26.943876 | orchestrator | Wednesday 08 April 2026 00:38:24 +0000 (0:00:00.092) 0:00:21.762 ******* 2026-04-08 00:38:26.943885 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:38:26.943893 | orchestrator | 2026-04-08 00:38:26.943900 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-08 00:38:26.943906 | orchestrator | Wednesday 08 April 2026 00:38:24 +0000 (0:00:00.100) 0:00:21.863 ******* 2026-04-08 00:38:26.943913 | orchestrator | changed: [testbed-node-4] => { 2026-04-08 00:38:26.943919 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-08 00:38:26.943926 | orchestrator | 
 "ceph_osd_devices": { 2026-04-08 00:38:26.943933 | orchestrator |  "sdb": { 2026-04-08 00:38:26.943940 | orchestrator |  "osd_lvm_uuid": "c5eee886-e951-5b32-a4a0-4842fe7aed13" 2026-04-08 00:38:26.943946 | orchestrator |  }, 2026-04-08 00:38:26.943953 | orchestrator |  "sdc": { 2026-04-08 00:38:26.943959 | orchestrator |  "osd_lvm_uuid": "16b9c52d-170e-5f8d-b9c1-c30752bb4b9e" 2026-04-08 00:38:26.943966 | orchestrator |  } 2026-04-08 00:38:26.943972 | orchestrator |  }, 2026-04-08 00:38:26.943979 | orchestrator |  "lvm_volumes": [ 2026-04-08 00:38:26.943989 | orchestrator |  { 2026-04-08 00:38:26.943996 | orchestrator |  "data": "osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13", 2026-04-08 00:38:26.944003 | orchestrator |  "data_vg": "ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13" 2026-04-08 00:38:26.944010 | orchestrator |  }, 2026-04-08 00:38:26.944016 | orchestrator |  { 2026-04-08 00:38:26.944023 | orchestrator |  "data": "osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e", 2026-04-08 00:38:26.944029 | orchestrator |  "data_vg": "ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e" 2026-04-08 00:38:26.944036 | orchestrator |  } 2026-04-08 00:38:26.944043 | orchestrator |  ] 2026-04-08 00:38:26.944049 | orchestrator |  } 2026-04-08 00:38:26.944056 | orchestrator | } 2026-04-08 00:38:26.944062 | orchestrator | 2026-04-08 00:38:26.944069 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-08 00:38:26.944075 | orchestrator | Wednesday 08 April 2026 00:38:24 +0000 (0:00:00.171) 0:00:22.035 ******* 2026-04-08 00:38:26.944082 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-08 00:38:26.944088 | orchestrator | 2026-04-08 00:38:26.944095 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-08 00:38:26.944126 | orchestrator | 2026-04-08 00:38:26.944134 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 
2026-04-08 00:38:26.944141 | orchestrator | Wednesday 08 April 2026 00:38:25 +0000 (0:00:00.974) 0:00:23.010 ******* 2026-04-08 00:38:26.944147 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-08 00:38:26.944156 | orchestrator | 2026-04-08 00:38:26.944161 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-08 00:38:26.944167 | orchestrator | Wednesday 08 April 2026 00:38:26 +0000 (0:00:00.360) 0:00:23.370 ******* 2026-04-08 00:38:26.944173 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:38:26.944179 | orchestrator | 2026-04-08 00:38:26.944185 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:26.944191 | orchestrator | Wednesday 08 April 2026 00:38:26 +0000 (0:00:00.441) 0:00:23.811 ******* 2026-04-08 00:38:26.944197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-08 00:38:26.944203 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-08 00:38:26.944212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-08 00:38:26.944220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-08 00:38:26.944227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-08 00:38:26.944241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-08 00:38:33.701505 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-08 00:38:33.701626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-08 00:38:33.701642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-08 
00:38:33.701654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-08 00:38:33.701665 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-08 00:38:33.701697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-08 00:38:33.701709 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-08 00:38:33.701720 | orchestrator | 2026-04-08 00:38:33.701732 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:33.701745 | orchestrator | Wednesday 08 April 2026 00:38:27 +0000 (0:00:00.349) 0:00:24.161 ******* 2026-04-08 00:38:33.701756 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.701768 | orchestrator | 2026-04-08 00:38:33.701779 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:33.701790 | orchestrator | Wednesday 08 April 2026 00:38:27 +0000 (0:00:00.180) 0:00:24.342 ******* 2026-04-08 00:38:33.701802 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.701812 | orchestrator | 2026-04-08 00:38:33.701823 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:33.701834 | orchestrator | Wednesday 08 April 2026 00:38:27 +0000 (0:00:00.180) 0:00:24.522 ******* 2026-04-08 00:38:33.701846 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.701857 | orchestrator | 2026-04-08 00:38:33.701868 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:33.701879 | orchestrator | Wednesday 08 April 2026 00:38:27 +0000 (0:00:00.184) 0:00:24.707 ******* 2026-04-08 00:38:33.701890 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.701901 | orchestrator | 2026-04-08 00:38:33.701918 | orchestrator 
| TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:33.701929 | orchestrator | Wednesday 08 April 2026 00:38:27 +0000 (0:00:00.182) 0:00:24.889 ******* 2026-04-08 00:38:33.701940 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.701974 | orchestrator | 2026-04-08 00:38:33.701986 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:33.701997 | orchestrator | Wednesday 08 April 2026 00:38:27 +0000 (0:00:00.170) 0:00:25.059 ******* 2026-04-08 00:38:33.702008 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.702081 | orchestrator | 2026-04-08 00:38:33.702123 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:33.702136 | orchestrator | Wednesday 08 April 2026 00:38:28 +0000 (0:00:00.167) 0:00:25.226 ******* 2026-04-08 00:38:33.702149 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.702161 | orchestrator | 2026-04-08 00:38:33.702174 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:33.702188 | orchestrator | Wednesday 08 April 2026 00:38:28 +0000 (0:00:00.186) 0:00:25.412 ******* 2026-04-08 00:38:33.702201 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.702213 | orchestrator | 2026-04-08 00:38:33.702226 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:33.702239 | orchestrator | Wednesday 08 April 2026 00:38:28 +0000 (0:00:00.177) 0:00:25.590 ******* 2026-04-08 00:38:33.702251 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343) 2026-04-08 00:38:33.702264 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343) 2026-04-08 00:38:33.702277 | orchestrator | 2026-04-08 00:38:33.702290 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2026-04-08 00:38:33.702303 | orchestrator | Wednesday 08 April 2026 00:38:28 +0000 (0:00:00.485) 0:00:26.075 ******* 2026-04-08 00:38:33.702316 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8b42dfd4-7d2e-4d67-9c6f-8993b8aa5f54) 2026-04-08 00:38:33.702329 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8b42dfd4-7d2e-4d67-9c6f-8993b8aa5f54) 2026-04-08 00:38:33.702341 | orchestrator | 2026-04-08 00:38:33.702353 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:33.702366 | orchestrator | Wednesday 08 April 2026 00:38:29 +0000 (0:00:00.588) 0:00:26.663 ******* 2026-04-08 00:38:33.702379 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_29911cfa-2062-4b30-9263-aae8438640a0) 2026-04-08 00:38:33.702391 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_29911cfa-2062-4b30-9263-aae8438640a0) 2026-04-08 00:38:33.702402 | orchestrator | 2026-04-08 00:38:33.702413 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:33.702424 | orchestrator | Wednesday 08 April 2026 00:38:29 +0000 (0:00:00.360) 0:00:27.024 ******* 2026-04-08 00:38:33.702434 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7b1a6d0f-b1ea-4446-8ff8-479db77ebf36) 2026-04-08 00:38:33.702446 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7b1a6d0f-b1ea-4446-8ff8-479db77ebf36) 2026-04-08 00:38:33.702456 | orchestrator | 2026-04-08 00:38:33.702467 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:38:33.702478 | orchestrator | Wednesday 08 April 2026 00:38:30 +0000 (0:00:00.360) 0:00:27.384 ******* 2026-04-08 00:38:33.702489 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-08 00:38:33.702500 | 
orchestrator | 2026-04-08 00:38:33.702511 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:33.702539 | orchestrator | Wednesday 08 April 2026 00:38:30 +0000 (0:00:00.270) 0:00:27.655 ******* 2026-04-08 00:38:33.702551 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-08 00:38:33.702562 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-08 00:38:33.702573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-08 00:38:33.702584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-08 00:38:33.702605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-08 00:38:33.702615 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-08 00:38:33.702626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-08 00:38:33.702637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-08 00:38:33.702648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-08 00:38:33.702659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-08 00:38:33.702670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-08 00:38:33.702680 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-08 00:38:33.702691 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-08 00:38:33.702702 | orchestrator | 
2026-04-08 00:38:33.702713 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:33.702723 | orchestrator | Wednesday 08 April 2026 00:38:30 +0000 (0:00:00.322) 0:00:27.977 ******* 2026-04-08 00:38:33.702734 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.702745 | orchestrator | 2026-04-08 00:38:33.702756 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:33.702767 | orchestrator | Wednesday 08 April 2026 00:38:31 +0000 (0:00:00.171) 0:00:28.149 ******* 2026-04-08 00:38:33.702778 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.702788 | orchestrator | 2026-04-08 00:38:33.702799 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:33.702810 | orchestrator | Wednesday 08 April 2026 00:38:31 +0000 (0:00:00.172) 0:00:28.321 ******* 2026-04-08 00:38:33.702821 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.702832 | orchestrator | 2026-04-08 00:38:33.702843 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:33.702853 | orchestrator | Wednesday 08 April 2026 00:38:31 +0000 (0:00:00.171) 0:00:28.492 ******* 2026-04-08 00:38:33.702864 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.702875 | orchestrator | 2026-04-08 00:38:33.702892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:33.702904 | orchestrator | Wednesday 08 April 2026 00:38:31 +0000 (0:00:00.163) 0:00:28.656 ******* 2026-04-08 00:38:33.702914 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.702925 | orchestrator | 2026-04-08 00:38:33.702936 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:33.702947 | orchestrator | Wednesday 08 April 2026 00:38:31 +0000 
(0:00:00.167) 0:00:28.824 ******* 2026-04-08 00:38:33.702958 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.702969 | orchestrator | 2026-04-08 00:38:33.702980 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:33.702991 | orchestrator | Wednesday 08 April 2026 00:38:32 +0000 (0:00:00.474) 0:00:29.298 ******* 2026-04-08 00:38:33.703001 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.703012 | orchestrator | 2026-04-08 00:38:33.703023 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:33.703034 | orchestrator | Wednesday 08 April 2026 00:38:32 +0000 (0:00:00.190) 0:00:29.489 ******* 2026-04-08 00:38:33.703045 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.703056 | orchestrator | 2026-04-08 00:38:33.703067 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:33.703077 | orchestrator | Wednesday 08 April 2026 00:38:32 +0000 (0:00:00.171) 0:00:29.661 ******* 2026-04-08 00:38:33.703088 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-08 00:38:33.703120 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-08 00:38:33.703138 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-08 00:38:33.703149 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-08 00:38:33.703160 | orchestrator | 2026-04-08 00:38:33.703171 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:33.703182 | orchestrator | Wednesday 08 April 2026 00:38:33 +0000 (0:00:00.535) 0:00:30.196 ******* 2026-04-08 00:38:33.703193 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.703204 | orchestrator | 2026-04-08 00:38:33.703214 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:33.703225 | orchestrator | 
Wednesday 08 April 2026 00:38:33 +0000 (0:00:00.162) 0:00:30.359 ******* 2026-04-08 00:38:33.703236 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.703247 | orchestrator | 2026-04-08 00:38:33.703258 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:33.703269 | orchestrator | Wednesday 08 April 2026 00:38:33 +0000 (0:00:00.165) 0:00:30.524 ******* 2026-04-08 00:38:33.703280 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.703290 | orchestrator | 2026-04-08 00:38:33.703301 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:38:33.703312 | orchestrator | Wednesday 08 April 2026 00:38:33 +0000 (0:00:00.157) 0:00:30.682 ******* 2026-04-08 00:38:33.703323 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:33.703334 | orchestrator | 2026-04-08 00:38:33.703351 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-08 00:38:37.374696 | orchestrator | Wednesday 08 April 2026 00:38:33 +0000 (0:00:00.162) 0:00:30.845 ******* 2026-04-08 00:38:37.374832 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-04-08 00:38:37.374860 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-04-08 00:38:37.374880 | orchestrator | 2026-04-08 00:38:37.374901 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-08 00:38:37.374913 | orchestrator | Wednesday 08 April 2026 00:38:33 +0000 (0:00:00.141) 0:00:30.986 ******* 2026-04-08 00:38:37.374924 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:37.374935 | orchestrator | 2026-04-08 00:38:37.374946 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-08 00:38:37.374957 | orchestrator | Wednesday 08 April 2026 00:38:33 +0000 (0:00:00.106) 0:00:31.093 ******* 
2026-04-08 00:38:37.374968 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:37.374979 | orchestrator | 2026-04-08 00:38:37.374989 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-08 00:38:37.375000 | orchestrator | Wednesday 08 April 2026 00:38:34 +0000 (0:00:00.120) 0:00:31.214 ******* 2026-04-08 00:38:37.375010 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:37.375021 | orchestrator | 2026-04-08 00:38:37.375032 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-08 00:38:37.375044 | orchestrator | Wednesday 08 April 2026 00:38:34 +0000 (0:00:00.115) 0:00:31.329 ******* 2026-04-08 00:38:37.375055 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:38:37.375066 | orchestrator | 2026-04-08 00:38:37.375077 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-08 00:38:37.375088 | orchestrator | Wednesday 08 April 2026 00:38:34 +0000 (0:00:00.236) 0:00:31.566 ******* 2026-04-08 00:38:37.375133 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c80af5d6-1159-5955-8f01-035b314db1bd'}}) 2026-04-08 00:38:37.375145 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd7d0ff5a-46f9-53d2-8425-61ef59e49033'}}) 2026-04-08 00:38:37.375156 | orchestrator | 2026-04-08 00:38:37.375167 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-08 00:38:37.375177 | orchestrator | Wednesday 08 April 2026 00:38:34 +0000 (0:00:00.144) 0:00:31.711 ******* 2026-04-08 00:38:37.375189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c80af5d6-1159-5955-8f01-035b314db1bd'}})  2026-04-08 00:38:37.375231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd7d0ff5a-46f9-53d2-8425-61ef59e49033'}})  
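The "Generate lvm_volumes structure (block only)" and "Print configuration data" tasks above show the mapping being applied: each entry of `ceph_osd_devices` contributes one LVM volume whose LV is named `osd-block-<osd_lvm_uuid>` inside a VG named `ceph-<osd_lvm_uuid>`. A minimal Python sketch of that mapping, using the testbed-node-5 values from the log (this is an illustration of the transformation visible in the output, not the actual role code):

```python
# Reproduce the lvm_volumes mapping visible in the "Print configuration
# data" output: osd_lvm_uuid -> {"data": "osd-block-<uuid>",
#                                "data_vg": "ceph-<uuid>"}.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "c80af5d6-1159-5955-8f01-035b314db1bd"},
    "sdc": {"osd_lvm_uuid": "d7d0ff5a-46f9-53d2-8425-61ef59e49033"},
}

lvm_volumes = [
    {
        "data": f"osd-block-{dev['osd_lvm_uuid']}",
        "data_vg": f"ceph-{dev['osd_lvm_uuid']}",
    }
    for dev in ceph_osd_devices.values()
]
```

This matches the `lvm_volumes` list that the run later hands to the "Write configuration file" handler for each node.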
2026-04-08 00:38:37.375244 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:37.375256 | orchestrator | 2026-04-08 00:38:37.375269 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-08 00:38:37.375282 | orchestrator | Wednesday 08 April 2026 00:38:34 +0000 (0:00:00.115) 0:00:31.827 ******* 2026-04-08 00:38:37.375294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c80af5d6-1159-5955-8f01-035b314db1bd'}})  2026-04-08 00:38:37.375307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd7d0ff5a-46f9-53d2-8425-61ef59e49033'}})  2026-04-08 00:38:37.375319 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:37.375331 | orchestrator | 2026-04-08 00:38:37.375344 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-08 00:38:37.375357 | orchestrator | Wednesday 08 April 2026 00:38:34 +0000 (0:00:00.129) 0:00:31.957 ******* 2026-04-08 00:38:37.375370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c80af5d6-1159-5955-8f01-035b314db1bd'}})  2026-04-08 00:38:37.375383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd7d0ff5a-46f9-53d2-8425-61ef59e49033'}})  2026-04-08 00:38:37.375396 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:37.375408 | orchestrator | 2026-04-08 00:38:37.375420 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-08 00:38:37.375433 | orchestrator | Wednesday 08 April 2026 00:38:34 +0000 (0:00:00.142) 0:00:32.099 ******* 2026-04-08 00:38:37.375446 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:38:37.375458 | orchestrator | 2026-04-08 00:38:37.375471 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-08 00:38:37.375483 | 
orchestrator | Wednesday 08 April 2026 00:38:35 +0000 (0:00:00.129) 0:00:32.229 ******* 2026-04-08 00:38:37.375495 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:38:37.375508 | orchestrator | 2026-04-08 00:38:37.375520 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-08 00:38:37.375533 | orchestrator | Wednesday 08 April 2026 00:38:35 +0000 (0:00:00.122) 0:00:32.351 ******* 2026-04-08 00:38:37.375545 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:37.375557 | orchestrator | 2026-04-08 00:38:37.375567 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-08 00:38:37.375578 | orchestrator | Wednesday 08 April 2026 00:38:35 +0000 (0:00:00.116) 0:00:32.467 ******* 2026-04-08 00:38:37.375589 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:37.375600 | orchestrator | 2026-04-08 00:38:37.375610 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-08 00:38:37.375621 | orchestrator | Wednesday 08 April 2026 00:38:35 +0000 (0:00:00.129) 0:00:32.597 ******* 2026-04-08 00:38:37.375632 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:37.375642 | orchestrator | 2026-04-08 00:38:37.375653 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-08 00:38:37.375664 | orchestrator | Wednesday 08 April 2026 00:38:35 +0000 (0:00:00.114) 0:00:32.711 ******* 2026-04-08 00:38:37.375675 | orchestrator | ok: [testbed-node-5] => { 2026-04-08 00:38:37.375685 | orchestrator |  "ceph_osd_devices": { 2026-04-08 00:38:37.375696 | orchestrator |  "sdb": { 2026-04-08 00:38:37.375728 | orchestrator |  "osd_lvm_uuid": "c80af5d6-1159-5955-8f01-035b314db1bd" 2026-04-08 00:38:37.375740 | orchestrator |  }, 2026-04-08 00:38:37.375751 | orchestrator |  "sdc": { 2026-04-08 00:38:37.375762 | orchestrator |  "osd_lvm_uuid": 
"d7d0ff5a-46f9-53d2-8425-61ef59e49033" 2026-04-08 00:38:37.375773 | orchestrator |  } 2026-04-08 00:38:37.375784 | orchestrator |  } 2026-04-08 00:38:37.375795 | orchestrator | } 2026-04-08 00:38:37.375806 | orchestrator | 2026-04-08 00:38:37.375837 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-08 00:38:37.375858 | orchestrator | Wednesday 08 April 2026 00:38:35 +0000 (0:00:00.137) 0:00:32.848 ******* 2026-04-08 00:38:37.375869 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:37.375880 | orchestrator | 2026-04-08 00:38:37.375891 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-08 00:38:37.375902 | orchestrator | Wednesday 08 April 2026 00:38:35 +0000 (0:00:00.133) 0:00:32.982 ******* 2026-04-08 00:38:37.375913 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:37.375924 | orchestrator | 2026-04-08 00:38:37.375935 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-08 00:38:37.375945 | orchestrator | Wednesday 08 April 2026 00:38:36 +0000 (0:00:00.295) 0:00:33.277 ******* 2026-04-08 00:38:37.375956 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:38:37.375967 | orchestrator | 2026-04-08 00:38:37.375978 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-08 00:38:37.375989 | orchestrator | Wednesday 08 April 2026 00:38:36 +0000 (0:00:00.126) 0:00:33.404 ******* 2026-04-08 00:38:37.375999 | orchestrator | changed: [testbed-node-5] => { 2026-04-08 00:38:37.376010 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-08 00:38:37.376022 | orchestrator |  "ceph_osd_devices": { 2026-04-08 00:38:37.376033 | orchestrator |  "sdb": { 2026-04-08 00:38:37.376044 | orchestrator |  "osd_lvm_uuid": "c80af5d6-1159-5955-8f01-035b314db1bd" 2026-04-08 00:38:37.376055 | orchestrator |  }, 2026-04-08 00:38:37.376066 | 
orchestrator |  "sdc": {
2026-04-08 00:38:37.376077 | orchestrator |  "osd_lvm_uuid": "d7d0ff5a-46f9-53d2-8425-61ef59e49033"
2026-04-08 00:38:37.376088 | orchestrator |  }
2026-04-08 00:38:37.376135 | orchestrator |  },
2026-04-08 00:38:37.376147 | orchestrator |  "lvm_volumes": [
2026-04-08 00:38:37.376159 | orchestrator |  {
2026-04-08 00:38:37.376170 | orchestrator |  "data": "osd-block-c80af5d6-1159-5955-8f01-035b314db1bd",
2026-04-08 00:38:37.376181 | orchestrator |  "data_vg": "ceph-c80af5d6-1159-5955-8f01-035b314db1bd"
2026-04-08 00:38:37.376192 | orchestrator |  },
2026-04-08 00:38:37.376203 | orchestrator |  {
2026-04-08 00:38:37.376219 | orchestrator |  "data": "osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033",
2026-04-08 00:38:37.376229 | orchestrator |  "data_vg": "ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033"
2026-04-08 00:38:37.376240 | orchestrator |  }
2026-04-08 00:38:37.376251 | orchestrator |  ]
2026-04-08 00:38:37.376262 | orchestrator |  }
2026-04-08 00:38:37.376273 | orchestrator | }
2026-04-08 00:38:37.376284 | orchestrator |
2026-04-08 00:38:37.376295 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-08 00:38:37.376306 | orchestrator | Wednesday 08 April 2026 00:38:36 +0000 (0:00:00.202) 0:00:33.607 *******
2026-04-08 00:38:37.376317 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-08 00:38:37.376327 | orchestrator |
2026-04-08 00:38:37.376338 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:38:37.376350 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-08 00:38:37.376362 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-08 00:38:37.376373 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-08 00:38:37.376384 | orchestrator |
2026-04-08 00:38:37.376395 | orchestrator |
2026-04-08 00:38:37.376405 | orchestrator |
2026-04-08 00:38:37.376416 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:38:37.376427 | orchestrator | Wednesday 08 April 2026 00:38:37 +0000 (0:00:00.897) 0:00:34.504 *******
2026-04-08 00:38:37.376438 | orchestrator | ===============================================================================
2026-04-08 00:38:37.376503 | orchestrator | Write configuration file ------------------------------------------------ 3.61s
2026-04-08 00:38:37.376514 | orchestrator | Add known links to the list of available block devices ------------------ 1.01s
2026-04-08 00:38:37.376525 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s
2026-04-08 00:38:37.376536 | orchestrator | Get initial list of available block devices ----------------------------- 0.84s
2026-04-08 00:38:37.376547 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.80s
2026-04-08 00:38:37.376557 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s
2026-04-08 00:38:37.376568 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s
2026-04-08 00:38:37.376579 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.57s
2026-04-08 00:38:37.376590 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s
2026-04-08 00:38:37.376601 | orchestrator | Add known partitions to the list of available block devices ------------- 0.54s
2026-04-08 00:38:37.376611 | orchestrator | Print configuration data ------------------------------------------------ 0.54s
2026-04-08 00:38:37.376622 | orchestrator | Add known partitions to the list of available block devices ------------- 0.54s
2026-04-08 00:38:37.376633 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s
2026-04-08 00:38:37.376653 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.51s
2026-04-08 00:38:37.665074 | orchestrator | Add known links to the list of available block devices ------------------ 0.51s
2026-04-08 00:38:37.665208 | orchestrator | Print DB devices -------------------------------------------------------- 0.49s
2026-04-08 00:38:37.665223 | orchestrator | Add known links to the list of available block devices ------------------ 0.49s
2026-04-08 00:38:37.665235 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.48s
2026-04-08 00:38:37.665247 | orchestrator | Add known partitions to the list of available block devices ------------- 0.47s
2026-04-08 00:38:37.665258 | orchestrator | Set WAL devices config data --------------------------------------------- 0.46s
2026-04-08 00:38:59.133371 | orchestrator | 2026-04-08 00:38:59 | INFO  | Task ff1a6dd4-675d-4a03-8c1f-6121806cb794 (sync inventory) is running in background. Output coming soon.
2026-04-08 00:39:25.748418 | orchestrator | 2026-04-08 00:39:00 | INFO  | Starting group_vars file reorganization
2026-04-08 00:39:25.748551 | orchestrator | 2026-04-08 00:39:00 | INFO  | Moved 0 file(s) to their respective directories
2026-04-08 00:39:25.748581 | orchestrator | 2026-04-08 00:39:00 | INFO  | Group_vars file reorganization completed
2026-04-08 00:39:25.748603 | orchestrator | 2026-04-08 00:39:03 | INFO  | Starting variable preparation from inventory
2026-04-08 00:39:25.748624 | orchestrator | 2026-04-08 00:39:05 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-08 00:39:25.748646 | orchestrator | 2026-04-08 00:39:05 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-08 00:39:25.748667 | orchestrator | 2026-04-08 00:39:05 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-08 00:39:25.748689 | orchestrator | 2026-04-08 00:39:05 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-08 00:39:25.748708 | orchestrator | 2026-04-08 00:39:05 | INFO  | Variable preparation completed
2026-04-08 00:39:25.748728 | orchestrator | 2026-04-08 00:39:06 | INFO  | Starting inventory overwrite handling
2026-04-08 00:39:25.748749 | orchestrator | 2026-04-08 00:39:06 | INFO  | Handling group overwrites in 99-overwrite
2026-04-08 00:39:25.748769 | orchestrator | 2026-04-08 00:39:06 | INFO  | Removing group frr:children from 60-generic
2026-04-08 00:39:25.748790 | orchestrator | 2026-04-08 00:39:06 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-08 00:39:25.748844 | orchestrator | 2026-04-08 00:39:06 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-08 00:39:25.748868 | orchestrator | 2026-04-08 00:39:06 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-08 00:39:25.748889 | orchestrator | 2026-04-08 00:39:06 | INFO  | Handling group overwrites in 20-roles
2026-04-08 00:39:25.748908 | orchestrator | 2026-04-08 00:39:06 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-08 00:39:25.748926 | orchestrator | 2026-04-08 00:39:06 | INFO  | Removed 5 group(s) in total
2026-04-08 00:39:25.748947 | orchestrator | 2026-04-08 00:39:06 | INFO  | Inventory overwrite handling completed
2026-04-08 00:39:25.748966 | orchestrator | 2026-04-08 00:39:07 | INFO  | Starting merge of inventory files
2026-04-08 00:39:25.748986 | orchestrator | 2026-04-08 00:39:07 | INFO  | Inventory files merged successfully
2026-04-08 00:39:25.749005 | orchestrator | 2026-04-08 00:39:12 | INFO  | Generating minified hosts file
2026-04-08 00:39:25.749024 | orchestrator | 2026-04-08 00:39:13 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-08 00:39:25.749074 | orchestrator | 2026-04-08 00:39:13 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-08 00:39:25.749095 | orchestrator | 2026-04-08 00:39:15 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-08 00:39:25.749114 | orchestrator | 2026-04-08 00:39:24 | INFO  | Successfully wrote ClusterShell configuration
2026-04-08 00:39:25.749161 | orchestrator | [master 488c9d5] 2026-04-08-00-39
2026-04-08 00:39:25.749185 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-04-08 00:39:25.749205 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-04-08 00:39:25.749225 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-04-08 00:39:25.749245 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-04-08 00:39:26.944150 | orchestrator | 2026-04-08 00:39:26 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-04-08 00:39:27.003029 | orchestrator | 2026-04-08 00:39:27 | INFO  | Task c6663b0e-41df-4237-b6a6-f084d55c9c4f (ceph-create-lvm-devices) was prepared for execution.
2026-04-08 00:39:27.003183 | orchestrator | 2026-04-08 00:39:27 | INFO  | It takes a moment until task c6663b0e-41df-4237-b6a6-f084d55c9c4f (ceph-create-lvm-devices) has been started and output is visible here.
2026-04-08 00:39:36.923017 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-08 00:39:36.923161 | orchestrator | 2.16.14
2026-04-08 00:39:36.923179 | orchestrator |
2026-04-08 00:39:36.923192 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-08 00:39:36.923204 | orchestrator |
2026-04-08 00:39:36.923216 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-08 00:39:36.923228 | orchestrator | Wednesday 08 April 2026 00:39:30 +0000 (0:00:00.199) 0:00:00.199 *******
2026-04-08 00:39:36.923239 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-08 00:39:36.923250 | orchestrator |
2026-04-08 00:39:36.923262 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-08 00:39:36.923273 | orchestrator | Wednesday 08 April 2026 00:39:30 +0000 (0:00:00.211) 0:00:00.411 *******
2026-04-08 00:39:36.923284 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:39:36.923295 | orchestrator |
2026-04-08 00:39:36.923306 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:39:36.923317 | orchestrator | Wednesday 08 April 2026 00:39:31 +0000 (0:00:00.182) 0:00:00.593 *******
2026-04-08 00:39:36.923328 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-08 00:39:36.923362 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-08 00:39:36.923373 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-08 00:39:36.923384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-08 00:39:36.923395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-08 00:39:36.923406 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-08 00:39:36.923430 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-08 00:39:36.923441 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-08 00:39:36.923452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-08 00:39:36.923462 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-08 00:39:36.923473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-08 00:39:36.923484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-08 00:39:36.923495 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-08 00:39:36.923506 | orchestrator |
2026-04-08 00:39:36.923516 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:39:36.923527 | orchestrator | Wednesday 08 April 2026 00:39:31 +0000 (0:00:00.352) 0:00:00.946 *******
2026-04-08 00:39:36.923538 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:36.923549 | orchestrator |
2026-04-08 00:39:36.923560 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:39:36.923571 | orchestrator | Wednesday 08 April 2026 00:39:31 +0000 (0:00:00.353) 0:00:01.300 *******
2026-04-08 00:39:36.923581 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:36.923592 | orchestrator |
2026-04-08 00:39:36.923603 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:39:36.923613 | orchestrator | Wednesday 08 April 2026 00:39:32 +0000 (0:00:00.191) 0:00:01.491 *******
2026-04-08 00:39:36.923624 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:36.923634 | orchestrator |
2026-04-08 00:39:36.923645 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:39:36.923656 | orchestrator | Wednesday 08 April 2026 00:39:32 +0000 (0:00:00.160) 0:00:01.652 *******
2026-04-08 00:39:36.923667 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:36.923677 | orchestrator |
2026-04-08 00:39:36.923688 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:39:36.923698 | orchestrator | Wednesday 08 April 2026 00:39:32 +0000 (0:00:00.162) 0:00:01.815 *******
2026-04-08 00:39:36.923709 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:36.923720 | orchestrator |
2026-04-08 00:39:36.923730 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:39:36.923741 | orchestrator | Wednesday 08 April 2026 00:39:32 +0000 (0:00:00.183) 0:00:01.998 *******
2026-04-08 00:39:36.923751 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:36.923762 | orchestrator |
2026-04-08 00:39:36.923773 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:39:36.923783 | orchestrator | Wednesday 08 April 2026 00:39:32 +0000 (0:00:00.165) 0:00:02.164 *******
2026-04-08 00:39:36.923795 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:36.923805 | orchestrator |
2026-04-08 00:39:36.923816 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:39:36.923827 | orchestrator | Wednesday 08 April 2026 00:39:32 +0000 (0:00:00.163) 0:00:02.327 *******
2026-04-08 00:39:36.923837 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:36.923848 | orchestrator |
2026-04-08 00:39:36.923859 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:39:36.923879 | orchestrator | Wednesday 08 April 2026 00:39:33 +0000 (0:00:00.174) 0:00:02.502 *******
2026-04-08 00:39:36.923890 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92)
2026-04-08 00:39:36.923902 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92)
2026-04-08 00:39:36.923913 | orchestrator |
2026-04-08 00:39:36.923923 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:39:36.923952 | orchestrator | Wednesday 08 April 2026 00:39:33 +0000 (0:00:00.361) 0:00:02.863 *******
2026-04-08 00:39:36.923963 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9851de66-42ac-4afe-9f6b-65921d8ebe77)
2026-04-08 00:39:36.923974 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9851de66-42ac-4afe-9f6b-65921d8ebe77)
2026-04-08 00:39:36.923985 | orchestrator |
2026-04-08 00:39:36.923996 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:39:36.924007 | orchestrator | Wednesday 08 April 2026 00:39:33 +0000 (0:00:00.353) 0:00:03.217 *******
2026-04-08 00:39:36.924017 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3c9bb5e0-782f-4e13-9d09-525e18a95d4a)
2026-04-08 00:39:36.924075 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3c9bb5e0-782f-4e13-9d09-525e18a95d4a)
2026-04-08 00:39:36.924087 | orchestrator |
2026-04-08 00:39:36.924098 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:39:36.924108 | orchestrator | Wednesday 08 April 2026 00:39:34 +0000 (0:00:00.476) 0:00:03.694 *******
2026-04-08 00:39:36.924119 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_10ca2d35-3b66-46f3-ab0f-253d8a66f2e6)
2026-04-08 00:39:36.924130 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_10ca2d35-3b66-46f3-ab0f-253d8a66f2e6)
2026-04-08 00:39:36.924140 | orchestrator |
2026-04-08 00:39:36.924151 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:39:36.924162 | orchestrator | Wednesday 08 April 2026 00:39:34 +0000 (0:00:00.533) 0:00:04.227 *******
2026-04-08 00:39:36.924172 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-08 00:39:36.924183 | orchestrator |
2026-04-08 00:39:36.924193 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:39:36.924204 | orchestrator | Wednesday 08 April 2026 00:39:35 +0000 (0:00:00.537) 0:00:04.765 *******
2026-04-08 00:39:36.924215 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-08 00:39:36.924226 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-08 00:39:36.924237 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-08 00:39:36.924247 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-08 00:39:36.924258 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-08 00:39:36.924269 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-08 00:39:36.924279 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-08 00:39:36.924290 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-08 00:39:36.924300 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-08 00:39:36.924311 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-08 00:39:36.924321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-08 00:39:36.924332 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-08 00:39:36.924350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-08 00:39:36.924361 | orchestrator |
2026-04-08 00:39:36.924371 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:39:36.924382 | orchestrator | Wednesday 08 April 2026 00:39:35 +0000 (0:00:00.367) 0:00:05.132 *******
2026-04-08 00:39:36.924392 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:36.924403 | orchestrator |
2026-04-08 00:39:36.924414 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:39:36.924424 | orchestrator | Wednesday 08 April 2026 00:39:35 +0000 (0:00:00.178) 0:00:05.311 *******
2026-04-08 00:39:36.924435 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:36.924445 | orchestrator |
2026-04-08 00:39:36.924456 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:39:36.924467 | orchestrator | Wednesday 08 April 2026 00:39:36 +0000 (0:00:00.176) 0:00:05.488 *******
2026-04-08 00:39:36.924478 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:36.924488 | orchestrator |
2026-04-08 00:39:36.924507 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:39:36.924518 | orchestrator | Wednesday 08 April 2026 00:39:36 +0000 (0:00:00.175) 0:00:05.663 *******
2026-04-08 00:39:36.924528 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:36.924539 | orchestrator |
2026-04-08 00:39:36.924550 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:39:36.924560 | orchestrator | Wednesday 08 April 2026 00:39:36 +0000 (0:00:00.169) 0:00:05.832 *******
2026-04-08 00:39:36.924571 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:36.924581 | orchestrator |
2026-04-08 00:39:36.924592 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:39:36.924603 | orchestrator | Wednesday 08 April 2026 00:39:36 +0000 (0:00:00.165) 0:00:05.998 *******
2026-04-08 00:39:36.924613 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:36.924630 | orchestrator |
2026-04-08 00:39:36.924648 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:39:36.924676 | orchestrator | Wednesday 08 April 2026 00:39:36 +0000 (0:00:00.167) 0:00:06.166 *******
2026-04-08 00:39:36.924696 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:36.924713 | orchestrator |
2026-04-08 00:39:36.924739 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:39:44.403167 | orchestrator | Wednesday 08 April 2026 00:39:36 +0000 (0:00:00.218) 0:00:06.384 *******
2026-04-08 00:39:44.403250 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:44.403258 | orchestrator |
2026-04-08 00:39:44.403265 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:39:44.403271 | orchestrator | Wednesday 08 April 2026 00:39:37 +0000 (0:00:00.176) 0:00:06.560 *******
2026-04-08 00:39:44.403277 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-08 00:39:44.403284 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-08 00:39:44.403290 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-08 00:39:44.403296 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-08 00:39:44.403301 | orchestrator |
2026-04-08 00:39:44.403307 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:39:44.403313 | orchestrator | Wednesday 08 April 2026 00:39:38 +0000 (0:00:00.951) 0:00:07.511 *******
2026-04-08 00:39:44.403318 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:44.403324 | orchestrator |
2026-04-08 00:39:44.403329 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:39:44.403335 | orchestrator | Wednesday 08 April 2026 00:39:38 +0000 (0:00:00.192) 0:00:07.704 *******
2026-04-08 00:39:44.403340 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:44.403346 | orchestrator |
2026-04-08 00:39:44.403351 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:39:44.403357 | orchestrator | Wednesday 08 April 2026 00:39:38 +0000 (0:00:00.180) 0:00:07.885 *******
2026-04-08 00:39:44.403380 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:44.403386 | orchestrator |
2026-04-08 00:39:44.403391 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:39:44.403397 | orchestrator | Wednesday 08 April 2026 00:39:38 +0000 (0:00:00.161) 0:00:08.046 *******
2026-04-08 00:39:44.403402 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:44.403407 | orchestrator |
2026-04-08 00:39:44.403413 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-08 00:39:44.403428 | orchestrator | Wednesday 08 April 2026 00:39:38 +0000 (0:00:00.124) 0:00:08.238 *******
2026-04-08 00:39:44.403434 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:44.403440 | orchestrator |
2026-04-08 00:39:44.403445 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-08 00:39:44.403450 | orchestrator | Wednesday 08 April 2026 00:39:38 +0000 (0:00:00.124) 0:00:08.362 *******
2026-04-08 00:39:44.403456 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'}})
2026-04-08 00:39:44.403462 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9c748ac0-b7ad-5284-8a6e-a168bddd5b66'}})
2026-04-08 00:39:44.403467 | orchestrator |
2026-04-08 00:39:44.403473 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-08 00:39:44.403478 | orchestrator | Wednesday 08 April 2026 00:39:39 +0000 (0:00:00.175) 0:00:08.537 *******
2026-04-08 00:39:44.403485 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})
2026-04-08 00:39:44.403492 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})
2026-04-08 00:39:44.403497 | orchestrator |
2026-04-08 00:39:44.403502 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-08 00:39:44.403508 | orchestrator | Wednesday 08 April 2026 00:39:41 +0000 (0:00:01.934) 0:00:10.472 *******
2026-04-08 00:39:44.403514 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})
2026-04-08 00:39:44.403521 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})
2026-04-08 00:39:44.403526 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:44.403531 | orchestrator |
2026-04-08 00:39:44.403537 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-08 00:39:44.403542 | orchestrator | Wednesday 08 April 2026 00:39:41 +0000 (0:00:00.138) 0:00:10.610 *******
2026-04-08 00:39:44.403548 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})
2026-04-08 00:39:44.403553 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})
2026-04-08 00:39:44.403559 | orchestrator |
2026-04-08 00:39:44.403564 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-08 00:39:44.403569 | orchestrator | Wednesday 08 April 2026 00:39:42 +0000 (0:00:01.464) 0:00:12.075 *******
2026-04-08 00:39:44.403575 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})
2026-04-08 00:39:44.403580 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})
2026-04-08 00:39:44.403585 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:44.403591 | orchestrator |
2026-04-08 00:39:44.403596 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-08 00:39:44.403607 | orchestrator | Wednesday 08 April 2026 00:39:42 +0000 (0:00:00.151) 0:00:12.226 *******
2026-04-08 00:39:44.403625 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:44.403631 | orchestrator |
2026-04-08 00:39:44.403637 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-08 00:39:44.403642 | orchestrator | Wednesday 08 April 2026 00:39:42 +0000 (0:00:00.126) 0:00:12.353 *******
2026-04-08 00:39:44.403648 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})
2026-04-08 00:39:44.403654 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})
2026-04-08 00:39:44.403661 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:44.403667 | orchestrator |
2026-04-08 00:39:44.403673 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-08 00:39:44.403679 | orchestrator | Wednesday 08 April 2026 00:39:43 +0000 (0:00:00.301) 0:00:12.655 *******
2026-04-08 00:39:44.403686 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:44.403692 | orchestrator |
2026-04-08 00:39:44.403698 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-08 00:39:44.403705 | orchestrator | Wednesday 08 April 2026 00:39:43 +0000 (0:00:00.122) 0:00:12.777 *******
2026-04-08 00:39:44.403711 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})
2026-04-08 00:39:44.403717 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})
2026-04-08 00:39:44.403724 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:44.403730 | orchestrator |
2026-04-08 00:39:44.403737 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-08 00:39:44.403743 | orchestrator | Wednesday 08 April 2026 00:39:43 +0000 (0:00:00.143) 0:00:12.920 *******
2026-04-08 00:39:44.403749 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:44.403756 | orchestrator |
2026-04-08 00:39:44.403762 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-08 00:39:44.403768 | orchestrator | Wednesday 08 April 2026 00:39:43 +0000 (0:00:00.122) 0:00:13.042 *******
2026-04-08 00:39:44.403775 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})
2026-04-08 00:39:44.403781 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})
2026-04-08 00:39:44.403788 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:44.403794 | orchestrator |
2026-04-08 00:39:44.403801 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-08 00:39:44.403807 | orchestrator | Wednesday 08 April 2026 00:39:43 +0000 (0:00:00.142) 0:00:13.185 *******
2026-04-08 00:39:44.403813 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:39:44.403820 | orchestrator |
2026-04-08 00:39:44.403827 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-08 00:39:44.403833 | orchestrator | Wednesday 08 April 2026 00:39:43 +0000 (0:00:00.126) 0:00:13.311 *******
2026-04-08 00:39:44.403839 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})
2026-04-08 00:39:44.403846 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})
2026-04-08 00:39:44.403852 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:44.403857 | orchestrator |
2026-04-08 00:39:44.403862 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-08 00:39:44.403868 | orchestrator | Wednesday 08 April 2026 00:39:43 +0000 (0:00:00.141) 0:00:13.453 *******
2026-04-08 00:39:44.403877 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})
2026-04-08 00:39:44.403882 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})
2026-04-08 00:39:44.403888 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:44.403893 | orchestrator |
2026-04-08 00:39:44.403898 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-08 00:39:44.403904 | orchestrator | Wednesday 08 April 2026 00:39:44 +0000 (0:00:00.146) 0:00:13.600 *******
2026-04-08 00:39:44.403909 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})
2026-04-08 00:39:44.403915 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})
2026-04-08 00:39:44.403920 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:44.403925 | orchestrator |
2026-04-08 00:39:44.403931 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-08 00:39:44.403936 | orchestrator | Wednesday 08 April 2026 00:39:44 +0000 (0:00:00.138) 0:00:13.738 *******
2026-04-08 00:39:44.403941 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:44.403947 | orchestrator |
2026-04-08 00:39:44.403952 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-08 00:39:44.403961 | orchestrator | Wednesday 08 April 2026 00:39:44 +0000 (0:00:00.124) 0:00:13.863 *******
2026-04-08 00:39:49.995891 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:49.995990 | orchestrator |
2026-04-08 00:39:49.996004 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-08 00:39:49.996063 | orchestrator | Wednesday 08 April 2026 00:39:44 +0000 (0:00:00.123) 0:00:13.986 *******
2026-04-08 00:39:49.996072 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:49.996081 | orchestrator |
2026-04-08 00:39:49.996089 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-08 00:39:49.996097 | orchestrator | Wednesday 08 April 2026 00:39:44 +0000 (0:00:00.113) 0:00:14.100 *******
2026-04-08 00:39:49.996105 | orchestrator | ok: [testbed-node-3] => {
2026-04-08 00:39:49.996114 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-04-08 00:39:49.996123 | orchestrator | }
2026-04-08 00:39:49.996131 | orchestrator |
2026-04-08 00:39:49.996139 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-08 00:39:49.996147 | orchestrator | Wednesday 08 April 2026 00:39:44 +0000 (0:00:00.286) 0:00:14.387 *******
2026-04-08 00:39:49.996155 | orchestrator | ok: [testbed-node-3] => {
2026-04-08 00:39:49.996163 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-04-08 00:39:49.996171 | orchestrator | }
2026-04-08 00:39:49.996179 | orchestrator |
2026-04-08 00:39:49.996187 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-08 00:39:49.996195 | orchestrator | Wednesday 08 April 2026 00:39:45 +0000 (0:00:00.131) 0:00:14.518 *******
2026-04-08 00:39:49.996203 | orchestrator | ok: [testbed-node-3] => {
2026-04-08 00:39:49.996211 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-04-08 00:39:49.996219 | orchestrator | }
2026-04-08 00:39:49.996227 | orchestrator |
2026-04-08 00:39:49.996235 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-08 00:39:49.996243 | orchestrator | Wednesday 08 April 2026 00:39:45 +0000 (0:00:00.130) 0:00:14.649 *******
2026-04-08 00:39:49.996250 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:39:49.996258 | orchestrator |
2026-04-08 00:39:49.996271 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-08 00:39:49.996279 | orchestrator | Wednesday 08 April 2026 00:39:45 +0000 (0:00:00.618) 0:00:15.268 *******
2026-04-08 00:39:49.996287 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:39:49.996314 | orchestrator |
2026-04-08 00:39:49.996322 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-08 00:39:49.996330 | orchestrator | Wednesday 08 April 2026 00:39:46 +0000 (0:00:00.480) 0:00:15.748 *******
2026-04-08 00:39:49.996338 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:39:49.996346 | orchestrator |
2026-04-08 00:39:49.996354 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-08 00:39:49.996361 | orchestrator | Wednesday 08 April 2026 00:39:46 +0000 (0:00:00.485) 0:00:16.234 *******
2026-04-08 00:39:49.996369 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:39:49.996377 | orchestrator |
2026-04-08 00:39:49.996385 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-08 00:39:49.996393 | orchestrator | Wednesday 08 April 2026 00:39:46 +0000 (0:00:00.136) 0:00:16.370 *******
2026-04-08 00:39:49.996401 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:49.996408 | orchestrator |
2026-04-08 00:39:49.996417 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-08 00:39:49.996427 | orchestrator | Wednesday 08 April 2026 00:39:47 +0000 (0:00:00.110) 0:00:16.480 *******
2026-04-08 00:39:49.996435 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:49.996444 | orchestrator |
2026-04-08 00:39:49.996453 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-08 00:39:49.996462 | orchestrator | Wednesday 08 April 2026 00:39:47 +0000 (0:00:00.097) 0:00:16.578 *******
2026-04-08 00:39:49.996471 | orchestrator | ok: [testbed-node-3] => {
2026-04-08 00:39:49.996480 | orchestrator |  "vgs_report": {
2026-04-08 00:39:49.996490 | orchestrator |  "vg": []
2026-04-08 00:39:49.996499 | orchestrator |  }
2026-04-08 00:39:49.996508 | orchestrator | }
2026-04-08 00:39:49.996517 | orchestrator |
2026-04-08 00:39:49.996526 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-08 00:39:49.996536 | orchestrator | Wednesday 08 April 2026 00:39:47 +0000 (0:00:00.130) 0:00:16.709 *******
2026-04-08 00:39:49.996544 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:49.996553 | orchestrator |
2026-04-08 00:39:49.996563 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-08 00:39:49.996571 | orchestrator | Wednesday 08 April 2026 00:39:47 +0000 (0:00:00.105) 0:00:16.814 *******
2026-04-08 00:39:49.996581 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:49.996590 | orchestrator |
2026-04-08 00:39:49.996599 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-08 00:39:49.996607 | orchestrator | Wednesday 08 April 2026 00:39:47 +0000 (0:00:00.127) 0:00:16.941 *******
2026-04-08 00:39:49.996616 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:49.996625 | orchestrator |
2026-04-08 00:39:49.996634 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-08 00:39:49.996643 | orchestrator | Wednesday 08 April 2026 00:39:47 +0000 (0:00:00.121) 0:00:17.063 *******
2026-04-08 00:39:49.996652 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:39:49.996661 | orchestrator | 2026-04-08 00:39:49.996670 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-08 00:39:49.996678 | orchestrator | Wednesday 08 April 2026 00:39:47 +0000 (0:00:00.277) 0:00:17.340 ******* 2026-04-08 00:39:49.996686 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:49.996693 | orchestrator | 2026-04-08 00:39:49.996701 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-08 00:39:49.996709 | orchestrator | Wednesday 08 April 2026 00:39:47 +0000 (0:00:00.116) 0:00:17.456 ******* 2026-04-08 00:39:49.996717 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:49.996725 | orchestrator | 2026-04-08 00:39:49.996732 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-08 00:39:49.996740 | orchestrator | Wednesday 08 April 2026 00:39:48 +0000 (0:00:00.149) 0:00:17.606 ******* 2026-04-08 00:39:49.996748 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:49.996756 | orchestrator | 2026-04-08 00:39:49.996763 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-08 00:39:49.996777 | orchestrator | Wednesday 08 April 2026 00:39:48 +0000 (0:00:00.121) 0:00:17.727 ******* 2026-04-08 00:39:49.996798 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:49.996806 | orchestrator | 2026-04-08 00:39:49.996814 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-08 00:39:49.996822 | orchestrator | Wednesday 08 April 2026 00:39:48 +0000 (0:00:00.113) 0:00:17.840 ******* 2026-04-08 00:39:49.996830 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:49.996838 | orchestrator | 2026-04-08 00:39:49.996846 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-08 00:39:49.996853 | orchestrator | 
Wednesday 08 April 2026 00:39:48 +0000 (0:00:00.121) 0:00:17.962 ******* 2026-04-08 00:39:49.996861 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:49.996869 | orchestrator | 2026-04-08 00:39:49.996877 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-08 00:39:49.996885 | orchestrator | Wednesday 08 April 2026 00:39:48 +0000 (0:00:00.118) 0:00:18.080 ******* 2026-04-08 00:39:49.996893 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:49.996900 | orchestrator | 2026-04-08 00:39:49.996908 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-08 00:39:49.996916 | orchestrator | Wednesday 08 April 2026 00:39:48 +0000 (0:00:00.121) 0:00:18.202 ******* 2026-04-08 00:39:49.996924 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:49.996931 | orchestrator | 2026-04-08 00:39:49.996939 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-08 00:39:49.996947 | orchestrator | Wednesday 08 April 2026 00:39:48 +0000 (0:00:00.121) 0:00:18.324 ******* 2026-04-08 00:39:49.996955 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:49.996963 | orchestrator | 2026-04-08 00:39:49.996970 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-08 00:39:49.996978 | orchestrator | Wednesday 08 April 2026 00:39:48 +0000 (0:00:00.108) 0:00:18.432 ******* 2026-04-08 00:39:49.996986 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:49.996994 | orchestrator | 2026-04-08 00:39:49.997005 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-08 00:39:49.997032 | orchestrator | Wednesday 08 April 2026 00:39:49 +0000 (0:00:00.114) 0:00:18.546 ******* 2026-04-08 00:39:49.997041 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 
'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})  2026-04-08 00:39:49.997051 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})  2026-04-08 00:39:49.997059 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:49.997067 | orchestrator | 2026-04-08 00:39:49.997075 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-08 00:39:49.997082 | orchestrator | Wednesday 08 April 2026 00:39:49 +0000 (0:00:00.128) 0:00:18.675 ******* 2026-04-08 00:39:49.997090 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})  2026-04-08 00:39:49.997098 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})  2026-04-08 00:39:49.997106 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:49.997114 | orchestrator | 2026-04-08 00:39:49.997122 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-08 00:39:49.997129 | orchestrator | Wednesday 08 April 2026 00:39:49 +0000 (0:00:00.299) 0:00:18.974 ******* 2026-04-08 00:39:49.997137 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})  2026-04-08 00:39:49.997145 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})  2026-04-08 00:39:49.997159 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:49.997166 | orchestrator | 2026-04-08 00:39:49.997174 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 
2026-04-08 00:39:49.997182 | orchestrator | Wednesday 08 April 2026 00:39:49 +0000 (0:00:00.144) 0:00:19.118 ******* 2026-04-08 00:39:49.997190 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})  2026-04-08 00:39:49.997197 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})  2026-04-08 00:39:49.997205 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:49.997213 | orchestrator | 2026-04-08 00:39:49.997221 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-08 00:39:49.997228 | orchestrator | Wednesday 08 April 2026 00:39:49 +0000 (0:00:00.140) 0:00:19.259 ******* 2026-04-08 00:39:49.997236 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})  2026-04-08 00:39:49.997244 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})  2026-04-08 00:39:49.997252 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:49.997259 | orchestrator | 2026-04-08 00:39:49.997267 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-08 00:39:49.997275 | orchestrator | Wednesday 08 April 2026 00:39:49 +0000 (0:00:00.136) 0:00:19.396 ******* 2026-04-08 00:39:49.997288 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})  2026-04-08 00:39:54.408104 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 
'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})  2026-04-08 00:39:54.408184 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:54.408191 | orchestrator | 2026-04-08 00:39:54.408197 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-08 00:39:54.408204 | orchestrator | Wednesday 08 April 2026 00:39:50 +0000 (0:00:00.151) 0:00:19.547 ******* 2026-04-08 00:39:54.408209 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})  2026-04-08 00:39:54.408214 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})  2026-04-08 00:39:54.408218 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:54.408223 | orchestrator | 2026-04-08 00:39:54.408227 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-08 00:39:54.408232 | orchestrator | Wednesday 08 April 2026 00:39:50 +0000 (0:00:00.127) 0:00:19.675 ******* 2026-04-08 00:39:54.408236 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})  2026-04-08 00:39:54.408241 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})  2026-04-08 00:39:54.408246 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:54.408250 | orchestrator | 2026-04-08 00:39:54.408255 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-08 00:39:54.408259 | orchestrator | Wednesday 08 April 2026 00:39:50 +0000 (0:00:00.128) 0:00:19.803 ******* 2026-04-08 00:39:54.408264 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:39:54.408269 | 
orchestrator | 2026-04-08 00:39:54.408274 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-08 00:39:54.408293 | orchestrator | Wednesday 08 April 2026 00:39:50 +0000 (0:00:00.473) 0:00:20.276 ******* 2026-04-08 00:39:54.408298 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:39:54.408302 | orchestrator | 2026-04-08 00:39:54.408307 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-08 00:39:54.408311 | orchestrator | Wednesday 08 April 2026 00:39:51 +0000 (0:00:00.483) 0:00:20.760 ******* 2026-04-08 00:39:54.408315 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:39:54.408320 | orchestrator | 2026-04-08 00:39:54.408324 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-08 00:39:54.408339 | orchestrator | Wednesday 08 April 2026 00:39:51 +0000 (0:00:00.132) 0:00:20.893 ******* 2026-04-08 00:39:54.408344 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'vg_name': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'}) 2026-04-08 00:39:54.408350 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'vg_name': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'}) 2026-04-08 00:39:54.408354 | orchestrator | 2026-04-08 00:39:54.408359 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-08 00:39:54.408363 | orchestrator | Wednesday 08 April 2026 00:39:51 +0000 (0:00:00.152) 0:00:21.046 ******* 2026-04-08 00:39:54.408368 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})  2026-04-08 00:39:54.408372 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 
'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})  2026-04-08 00:39:54.408377 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:54.408381 | orchestrator | 2026-04-08 00:39:54.408385 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-08 00:39:54.408390 | orchestrator | Wednesday 08 April 2026 00:39:51 +0000 (0:00:00.132) 0:00:21.179 ******* 2026-04-08 00:39:54.408394 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})  2026-04-08 00:39:54.408399 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})  2026-04-08 00:39:54.408403 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:54.408408 | orchestrator | 2026-04-08 00:39:54.408412 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-08 00:39:54.408416 | orchestrator | Wednesday 08 April 2026 00:39:52 +0000 (0:00:00.307) 0:00:21.487 ******* 2026-04-08 00:39:54.408421 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'})  2026-04-08 00:39:54.408425 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'})  2026-04-08 00:39:54.408430 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:39:54.408434 | orchestrator | 2026-04-08 00:39:54.408438 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-08 00:39:54.408443 | orchestrator | Wednesday 08 April 2026 00:39:52 +0000 (0:00:00.152) 0:00:21.639 ******* 2026-04-08 00:39:54.408458 | orchestrator | ok: [testbed-node-3] => { 2026-04-08 
00:39:54.408463 | orchestrator |  "lvm_report": { 2026-04-08 00:39:54.408468 | orchestrator |  "lv": [ 2026-04-08 00:39:54.408473 | orchestrator |  { 2026-04-08 00:39:54.408477 | orchestrator |  "lv_name": "osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8", 2026-04-08 00:39:54.408482 | orchestrator |  "vg_name": "ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8" 2026-04-08 00:39:54.408487 | orchestrator |  }, 2026-04-08 00:39:54.408491 | orchestrator |  { 2026-04-08 00:39:54.408496 | orchestrator |  "lv_name": "osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66", 2026-04-08 00:39:54.408505 | orchestrator |  "vg_name": "ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66" 2026-04-08 00:39:54.408510 | orchestrator |  } 2026-04-08 00:39:54.408514 | orchestrator |  ], 2026-04-08 00:39:54.408518 | orchestrator |  "pv": [ 2026-04-08 00:39:54.408523 | orchestrator |  { 2026-04-08 00:39:54.408527 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-08 00:39:54.408532 | orchestrator |  "vg_name": "ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8" 2026-04-08 00:39:54.408536 | orchestrator |  }, 2026-04-08 00:39:54.408540 | orchestrator |  { 2026-04-08 00:39:54.408545 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-08 00:39:54.408549 | orchestrator |  "vg_name": "ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66" 2026-04-08 00:39:54.408553 | orchestrator |  } 2026-04-08 00:39:54.408558 | orchestrator |  ] 2026-04-08 00:39:54.408562 | orchestrator |  } 2026-04-08 00:39:54.408567 | orchestrator | } 2026-04-08 00:39:54.408571 | orchestrator | 2026-04-08 00:39:54.408576 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-08 00:39:54.408580 | orchestrator | 2026-04-08 00:39:54.408584 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-08 00:39:54.408591 | orchestrator | Wednesday 08 April 2026 00:39:52 +0000 (0:00:00.254) 0:00:21.894 ******* 2026-04-08 00:39:54.408596 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-04-08 00:39:54.408600 | orchestrator | 2026-04-08 00:39:54.408605 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-08 00:39:54.408609 | orchestrator | Wednesday 08 April 2026 00:39:52 +0000 (0:00:00.213) 0:00:22.108 ******* 2026-04-08 00:39:54.408614 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:39:54.408618 | orchestrator | 2026-04-08 00:39:54.408623 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:39:54.408627 | orchestrator | Wednesday 08 April 2026 00:39:52 +0000 (0:00:00.192) 0:00:22.300 ******* 2026-04-08 00:39:54.408631 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-08 00:39:54.408636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-08 00:39:54.408640 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-08 00:39:54.408645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-08 00:39:54.408649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-08 00:39:54.408653 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-08 00:39:54.408658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-08 00:39:54.408662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-08 00:39:54.408666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-08 00:39:54.408671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-08 00:39:54.408675 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-08 00:39:54.408679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-08 00:39:54.408684 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-08 00:39:54.408688 | orchestrator | 2026-04-08 00:39:54.408692 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:39:54.408697 | orchestrator | Wednesday 08 April 2026 00:39:53 +0000 (0:00:00.340) 0:00:22.641 ******* 2026-04-08 00:39:54.408701 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:39:54.408706 | orchestrator | 2026-04-08 00:39:54.408710 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:39:54.408718 | orchestrator | Wednesday 08 April 2026 00:39:53 +0000 (0:00:00.178) 0:00:22.820 ******* 2026-04-08 00:39:54.408723 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:39:54.408727 | orchestrator | 2026-04-08 00:39:54.408731 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:39:54.408736 | orchestrator | Wednesday 08 April 2026 00:39:53 +0000 (0:00:00.163) 0:00:22.983 ******* 2026-04-08 00:39:54.408740 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:39:54.408745 | orchestrator | 2026-04-08 00:39:54.408749 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:39:54.408753 | orchestrator | Wednesday 08 April 2026 00:39:53 +0000 (0:00:00.150) 0:00:23.134 ******* 2026-04-08 00:39:54.408758 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:39:54.408762 | orchestrator | 2026-04-08 00:39:54.408766 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:39:54.408771 | orchestrator | Wednesday 08 April 2026 00:39:54 +0000 
(0:00:00.392) 0:00:23.527 ******* 2026-04-08 00:39:54.408775 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:39:54.408780 | orchestrator | 2026-04-08 00:39:54.408784 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:39:54.408788 | orchestrator | Wednesday 08 April 2026 00:39:54 +0000 (0:00:00.180) 0:00:23.708 ******* 2026-04-08 00:39:54.408793 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:39:54.408797 | orchestrator | 2026-04-08 00:39:54.408804 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:03.337182 | orchestrator | Wednesday 08 April 2026 00:39:54 +0000 (0:00:00.161) 0:00:23.869 ******* 2026-04-08 00:40:03.337293 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:03.337310 | orchestrator | 2026-04-08 00:40:03.337324 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:03.337336 | orchestrator | Wednesday 08 April 2026 00:39:54 +0000 (0:00:00.161) 0:00:24.031 ******* 2026-04-08 00:40:03.337348 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:03.337360 | orchestrator | 2026-04-08 00:40:03.337372 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:03.337384 | orchestrator | Wednesday 08 April 2026 00:39:54 +0000 (0:00:00.170) 0:00:24.201 ******* 2026-04-08 00:40:03.337395 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa) 2026-04-08 00:40:03.337409 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa) 2026-04-08 00:40:03.337421 | orchestrator | 2026-04-08 00:40:03.337435 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:03.337455 | orchestrator | Wednesday 08 April 2026 00:39:55 +0000 
(0:00:00.356) 0:00:24.558 ******* 2026-04-08 00:40:03.337483 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_49047f2d-69c1-4fac-a475-f46440c51814) 2026-04-08 00:40:03.337504 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_49047f2d-69c1-4fac-a475-f46440c51814) 2026-04-08 00:40:03.337521 | orchestrator | 2026-04-08 00:40:03.337536 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:03.337571 | orchestrator | Wednesday 08 April 2026 00:39:55 +0000 (0:00:00.389) 0:00:24.947 ******* 2026-04-08 00:40:03.337590 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0a17ff12-522e-4235-8e4c-edb4898b90f5) 2026-04-08 00:40:03.337607 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0a17ff12-522e-4235-8e4c-edb4898b90f5) 2026-04-08 00:40:03.337625 | orchestrator | 2026-04-08 00:40:03.337644 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:03.337664 | orchestrator | Wednesday 08 April 2026 00:39:55 +0000 (0:00:00.394) 0:00:25.342 ******* 2026-04-08 00:40:03.337682 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_22a44c82-a679-4e37-857c-f96ffb845a8a) 2026-04-08 00:40:03.337731 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_22a44c82-a679-4e37-857c-f96ffb845a8a) 2026-04-08 00:40:03.337748 | orchestrator | 2026-04-08 00:40:03.337760 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:03.337770 | orchestrator | Wednesday 08 April 2026 00:39:56 +0000 (0:00:00.354) 0:00:25.696 ******* 2026-04-08 00:40:03.337781 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-08 00:40:03.337792 | orchestrator | 2026-04-08 00:40:03.337803 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 
00:40:03.337813 | orchestrator | Wednesday 08 April 2026 00:39:56 +0000 (0:00:00.287) 0:00:25.984 ******* 2026-04-08 00:40:03.337824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-08 00:40:03.337835 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-08 00:40:03.337846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-08 00:40:03.337856 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-08 00:40:03.337867 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-08 00:40:03.337877 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-08 00:40:03.337888 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-08 00:40:03.337899 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-08 00:40:03.337909 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-08 00:40:03.337920 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-08 00:40:03.337931 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-08 00:40:03.337942 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-08 00:40:03.337952 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-08 00:40:03.337963 | orchestrator | 2026-04-08 00:40:03.337974 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:03.337984 | 
orchestrator | Wednesday 08 April 2026 00:39:57 +0000 (0:00:00.578) 0:00:26.563 *******
2026-04-08 00:40:03.338115 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:03.338130 | orchestrator |
2026-04-08 00:40:03.338141 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:03.338152 | orchestrator | Wednesday 08 April 2026 00:39:57 +0000 (0:00:00.202) 0:00:26.765 *******
2026-04-08 00:40:03.338163 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:03.338173 | orchestrator |
2026-04-08 00:40:03.338184 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:03.338195 | orchestrator | Wednesday 08 April 2026 00:39:57 +0000 (0:00:00.181) 0:00:26.947 *******
2026-04-08 00:40:03.338206 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:03.338216 | orchestrator |
2026-04-08 00:40:03.338248 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:03.338260 | orchestrator | Wednesday 08 April 2026 00:39:57 +0000 (0:00:00.184) 0:00:27.132 *******
2026-04-08 00:40:03.338271 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:03.338282 | orchestrator |
2026-04-08 00:40:03.338292 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:03.338303 | orchestrator | Wednesday 08 April 2026 00:39:57 +0000 (0:00:00.159) 0:00:27.291 *******
2026-04-08 00:40:03.338314 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:03.338324 | orchestrator |
2026-04-08 00:40:03.338335 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:03.338356 | orchestrator | Wednesday 08 April 2026 00:39:57 +0000 (0:00:00.161) 0:00:27.452 *******
2026-04-08 00:40:03.338367 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:03.338378 | orchestrator |
2026-04-08 00:40:03.338389 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:03.338399 | orchestrator | Wednesday 08 April 2026 00:39:58 +0000 (0:00:00.155) 0:00:27.608 *******
2026-04-08 00:40:03.338411 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:03.338422 | orchestrator |
2026-04-08 00:40:03.338434 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:03.338455 | orchestrator | Wednesday 08 April 2026 00:39:58 +0000 (0:00:00.176) 0:00:27.785 *******
2026-04-08 00:40:03.338474 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:03.338492 | orchestrator |
2026-04-08 00:40:03.338511 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:03.338533 | orchestrator | Wednesday 08 April 2026 00:39:58 +0000 (0:00:00.164) 0:00:27.949 *******
2026-04-08 00:40:03.338551 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-08 00:40:03.338577 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-08 00:40:03.338589 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-08 00:40:03.338600 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-08 00:40:03.338610 | orchestrator |
2026-04-08 00:40:03.338621 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:03.338632 | orchestrator | Wednesday 08 April 2026 00:39:59 +0000 (0:00:00.690) 0:00:28.640 *******
2026-04-08 00:40:03.338642 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:03.338653 | orchestrator |
2026-04-08 00:40:03.338663 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:03.338674 | orchestrator | Wednesday 08 April 2026 00:39:59 +0000 (0:00:00.171) 0:00:28.811 *******
2026-04-08 00:40:03.338685 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:03.338695 | orchestrator |
2026-04-08 00:40:03.338706 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:03.338716 | orchestrator | Wednesday 08 April 2026 00:39:59 +0000 (0:00:00.179) 0:00:28.991 *******
2026-04-08 00:40:03.338727 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:03.338738 | orchestrator |
2026-04-08 00:40:03.338748 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:03.338759 | orchestrator | Wednesday 08 April 2026 00:39:59 +0000 (0:00:00.441) 0:00:29.432 *******
2026-04-08 00:40:03.338769 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:03.338780 | orchestrator |
2026-04-08 00:40:03.338790 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-08 00:40:03.338801 | orchestrator | Wednesday 08 April 2026 00:40:00 +0000 (0:00:00.179) 0:00:29.611 *******
2026-04-08 00:40:03.338812 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:03.338822 | orchestrator |
2026-04-08 00:40:03.338833 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-08 00:40:03.338843 | orchestrator | Wednesday 08 April 2026 00:40:00 +0000 (0:00:00.108) 0:00:29.720 *******
2026-04-08 00:40:03.338854 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5eee886-e951-5b32-a4a0-4842fe7aed13'}})
2026-04-08 00:40:03.338865 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'}})
2026-04-08 00:40:03.338876 | orchestrator |
2026-04-08 00:40:03.338887 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-08 00:40:03.338897 | orchestrator | Wednesday 08 April 2026 00:40:00 +0000 (0:00:00.163) 0:00:29.883 *******
2026-04-08 00:40:03.338909 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:03.338921 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:03.338940 | orchestrator |
2026-04-08 00:40:03.338950 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-08 00:40:03.338961 | orchestrator | Wednesday 08 April 2026 00:40:02 +0000 (0:00:01.684) 0:00:31.568 *******
2026-04-08 00:40:03.338972 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:03.338984 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:03.339019 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:03.339031 | orchestrator |
2026-04-08 00:40:03.339042 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-08 00:40:03.339052 | orchestrator | Wednesday 08 April 2026 00:40:02 +0000 (0:00:00.141) 0:00:31.710 *******
2026-04-08 00:40:03.339063 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:03.339082 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:08.057425 | orchestrator |
2026-04-08 00:40:08.057546 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-08 00:40:08.057579 | orchestrator | Wednesday 08 April 2026 00:40:03 +0000 (0:00:01.163) 0:00:32.873 *******
2026-04-08 00:40:08.057602 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:08.057624 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:08.057645 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:08.057667 | orchestrator |
2026-04-08 00:40:08.057688 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-08 00:40:08.057709 | orchestrator | Wednesday 08 April 2026 00:40:03 +0000 (0:00:00.122) 0:00:32.995 *******
2026-04-08 00:40:08.057729 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:08.057750 | orchestrator |
2026-04-08 00:40:08.057771 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-08 00:40:08.057791 | orchestrator | Wednesday 08 April 2026 00:40:03 +0000 (0:00:00.117) 0:00:33.113 *******
2026-04-08 00:40:08.057827 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:08.057849 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:08.057870 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:08.057891 | orchestrator |
2026-04-08 00:40:08.057912 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-08 00:40:08.057933 | orchestrator | Wednesday 08 April 2026 00:40:03 +0000 (0:00:00.127) 0:00:33.240 *******
2026-04-08 00:40:08.057955 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:08.057977 | orchestrator |
2026-04-08 00:40:08.058096 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-08 00:40:08.058121 | orchestrator | Wednesday 08 April 2026 00:40:03 +0000 (0:00:00.125) 0:00:33.366 *******
2026-04-08 00:40:08.058139 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:08.058160 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:08.058180 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:08.058230 | orchestrator |
2026-04-08 00:40:08.058252 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-08 00:40:08.058272 | orchestrator | Wednesday 08 April 2026 00:40:04 +0000 (0:00:00.133) 0:00:33.499 *******
2026-04-08 00:40:08.058293 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:08.058315 | orchestrator |
2026-04-08 00:40:08.058338 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-08 00:40:08.058361 | orchestrator | Wednesday 08 April 2026 00:40:04 +0000 (0:00:00.254) 0:00:33.754 *******
2026-04-08 00:40:08.058383 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:08.058403 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:08.058423 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:08.058442 | orchestrator |
2026-04-08 00:40:08.058461 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-08 00:40:08.058480 | orchestrator | Wednesday 08 April 2026 00:40:04 +0000 (0:00:00.132) 0:00:33.886 *******
2026-04-08 00:40:08.058499 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:40:08.058520 | orchestrator |
2026-04-08 00:40:08.058540 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-08 00:40:08.058562 | orchestrator | Wednesday 08 April 2026 00:40:04 +0000 (0:00:00.122) 0:00:34.009 *******
2026-04-08 00:40:08.058582 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:08.058602 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:08.058623 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:08.058642 | orchestrator |
2026-04-08 00:40:08.058660 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-08 00:40:08.058679 | orchestrator | Wednesday 08 April 2026 00:40:04 +0000 (0:00:00.127) 0:00:34.137 *******
2026-04-08 00:40:08.058699 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:08.058719 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:08.058739 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:08.058759 | orchestrator |
2026-04-08 00:40:08.058780 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-08 00:40:08.058825 | orchestrator | Wednesday 08 April 2026 00:40:04 +0000 (0:00:00.133) 0:00:34.270 *******
2026-04-08 00:40:08.058848 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:08.058868 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:08.058882 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:08.058893 | orchestrator |
2026-04-08 00:40:08.058904 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-08 00:40:08.058915 | orchestrator | Wednesday 08 April 2026 00:40:04 +0000 (0:00:00.131) 0:00:34.402 *******
2026-04-08 00:40:08.058926 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:08.058937 | orchestrator |
2026-04-08 00:40:08.058947 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-08 00:40:08.058958 | orchestrator | Wednesday 08 April 2026 00:40:05 +0000 (0:00:00.113) 0:00:34.515 *******
2026-04-08 00:40:08.058969 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:08.059020 | orchestrator |
2026-04-08 00:40:08.059033 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-08 00:40:08.059044 | orchestrator | Wednesday 08 April 2026 00:40:05 +0000 (0:00:00.120) 0:00:34.636 *******
2026-04-08 00:40:08.059055 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:08.059066 | orchestrator |
2026-04-08 00:40:08.059085 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-08 00:40:08.059097 | orchestrator | Wednesday 08 April 2026 00:40:05 +0000 (0:00:00.112) 0:00:34.748 *******
2026-04-08 00:40:08.059107 | orchestrator | ok: [testbed-node-4] => {
2026-04-08 00:40:08.059118 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-08 00:40:08.059129 | orchestrator | }
2026-04-08 00:40:08.059140 | orchestrator |
2026-04-08 00:40:08.059151 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-08 00:40:08.059162 | orchestrator | Wednesday 08 April 2026 00:40:05 +0000 (0:00:00.122) 0:00:34.871 *******
2026-04-08 00:40:08.059173 | orchestrator | ok: [testbed-node-4] => {
2026-04-08 00:40:08.059184 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-08 00:40:08.059194 | orchestrator | }
2026-04-08 00:40:08.059204 | orchestrator |
2026-04-08 00:40:08.059214 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-08 00:40:08.059224 | orchestrator | Wednesday 08 April 2026 00:40:05 +0000 (0:00:00.139) 0:00:35.010 *******
2026-04-08 00:40:08.059233 | orchestrator | ok: [testbed-node-4] => {
2026-04-08 00:40:08.059243 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-08 00:40:08.059253 | orchestrator | }
2026-04-08 00:40:08.059262 | orchestrator |
2026-04-08 00:40:08.059272 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-08 00:40:08.059281 | orchestrator | Wednesday 08 April 2026 00:40:05 +0000 (0:00:00.119) 0:00:35.130 *******
2026-04-08 00:40:08.059291 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:40:08.059300 | orchestrator |
2026-04-08 00:40:08.059310 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-08 00:40:08.059320 | orchestrator | Wednesday 08 April 2026 00:40:06 +0000 (0:00:00.574) 0:00:35.705 *******
2026-04-08 00:40:08.059329 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:40:08.059339 | orchestrator |
2026-04-08 00:40:08.059348 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-08 00:40:08.059358 | orchestrator | Wednesday 08 April 2026 00:40:06 +0000 (0:00:00.450) 0:00:36.155 *******
2026-04-08 00:40:08.059367 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:40:08.059377 | orchestrator |
2026-04-08 00:40:08.059386 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-08 00:40:08.059396 | orchestrator | Wednesday 08 April 2026 00:40:07 +0000 (0:00:00.483) 0:00:36.639 *******
2026-04-08 00:40:08.059405 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:40:08.059415 | orchestrator |
2026-04-08 00:40:08.059424 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-08 00:40:08.059434 | orchestrator | Wednesday 08 April 2026 00:40:07 +0000 (0:00:00.125) 0:00:36.764 *******
2026-04-08 00:40:08.059443 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:08.059453 | orchestrator |
2026-04-08 00:40:08.059462 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-08 00:40:08.059477 | orchestrator | Wednesday 08 April 2026 00:40:07 +0000 (0:00:00.104) 0:00:36.869 *******
2026-04-08 00:40:08.059494 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:08.059509 | orchestrator |
2026-04-08 00:40:08.059526 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-08 00:40:08.059542 | orchestrator | Wednesday 08 April 2026 00:40:07 +0000 (0:00:00.093) 0:00:36.962 *******
2026-04-08 00:40:08.059559 | orchestrator | ok: [testbed-node-4] => {
2026-04-08 00:40:08.059576 | orchestrator |     "vgs_report": {
2026-04-08 00:40:08.059591 | orchestrator |         "vg": []
2026-04-08 00:40:08.059602 | orchestrator |     }
2026-04-08 00:40:08.059612 | orchestrator | }
2026-04-08 00:40:08.059621 | orchestrator |
2026-04-08 00:40:08.059631 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-08 00:40:08.059649 | orchestrator | Wednesday 08 April 2026 00:40:07 +0000 (0:00:00.119) 0:00:37.082 *******
2026-04-08 00:40:08.059659 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:08.059668 | orchestrator |
2026-04-08 00:40:08.059678 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-08 00:40:08.059687 | orchestrator | Wednesday 08 April 2026 00:40:07 +0000 (0:00:00.100) 0:00:37.182 *******
2026-04-08 00:40:08.059697 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:08.059706 | orchestrator |
2026-04-08 00:40:08.059716 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-08 00:40:08.059726 | orchestrator | Wednesday 08 April 2026 00:40:07 +0000 (0:00:00.113) 0:00:37.296 *******
2026-04-08 00:40:08.059735 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:08.059745 | orchestrator |
2026-04-08 00:40:08.059754 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-08 00:40:08.059764 | orchestrator | Wednesday 08 April 2026 00:40:07 +0000 (0:00:00.106) 0:00:37.402 *******
2026-04-08 00:40:08.059774 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:08.059784 | orchestrator |
2026-04-08 00:40:08.059802 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-08 00:40:11.922612 | orchestrator | Wednesday 08 April 2026 00:40:08 +0000 (0:00:00.112) 0:00:37.515 *******
2026-04-08 00:40:11.922723 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:11.922740 | orchestrator |
2026-04-08 00:40:11.922753 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-08 00:40:11.922766 | orchestrator | Wednesday 08 April 2026 00:40:08 +0000 (0:00:00.116) 0:00:37.632 *******
2026-04-08 00:40:11.922777 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:11.922788 | orchestrator |
2026-04-08 00:40:11.922799 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-08 00:40:11.922810 | orchestrator | Wednesday 08 April 2026 00:40:08 +0000 (0:00:00.228) 0:00:37.861 *******
2026-04-08 00:40:11.922821 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:11.922831 | orchestrator |
2026-04-08 00:40:11.922842 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-08 00:40:11.922853 | orchestrator | Wednesday 08 April 2026 00:40:08 +0000 (0:00:00.178) 0:00:38.040 *******
2026-04-08 00:40:11.922864 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:11.922875 | orchestrator |
2026-04-08 00:40:11.922886 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-08 00:40:11.922897 | orchestrator | Wednesday 08 April 2026 00:40:08 +0000 (0:00:00.129) 0:00:38.169 *******
2026-04-08 00:40:11.922907 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:11.922918 | orchestrator |
2026-04-08 00:40:11.922929 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-08 00:40:11.922940 | orchestrator | Wednesday 08 April 2026 00:40:08 +0000 (0:00:00.103) 0:00:38.272 *******
2026-04-08 00:40:11.922951 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:11.922962 | orchestrator |
2026-04-08 00:40:11.922972 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-08 00:40:11.923046 | orchestrator | Wednesday 08 April 2026 00:40:08 +0000 (0:00:00.105) 0:00:38.377 *******
2026-04-08 00:40:11.923061 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:11.923072 | orchestrator |
2026-04-08 00:40:11.923083 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-08 00:40:11.923094 | orchestrator | Wednesday 08 April 2026 00:40:09 +0000 (0:00:00.120) 0:00:38.497 *******
2026-04-08 00:40:11.923105 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:11.923116 | orchestrator |
2026-04-08 00:40:11.923127 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-08 00:40:11.923157 | orchestrator | Wednesday 08 April 2026 00:40:09 +0000 (0:00:00.117) 0:00:38.615 *******
2026-04-08 00:40:11.923172 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:11.923191 | orchestrator |
2026-04-08 00:40:11.923211 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-08 00:40:11.923259 | orchestrator | Wednesday 08 April 2026 00:40:09 +0000 (0:00:00.116) 0:00:38.732 *******
2026-04-08 00:40:11.923281 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:11.923319 | orchestrator |
2026-04-08 00:40:11.923339 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-08 00:40:11.923357 | orchestrator | Wednesday 08 April 2026 00:40:09 +0000 (0:00:00.113) 0:00:38.845 *******
2026-04-08 00:40:11.923377 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:11.923399 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:11.923419 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:11.923435 | orchestrator |
2026-04-08 00:40:11.923448 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-08 00:40:11.923461 | orchestrator | Wednesday 08 April 2026 00:40:09 +0000 (0:00:00.131) 0:00:38.977 *******
2026-04-08 00:40:11.923475 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:11.923488 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:11.923500 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:11.923511 | orchestrator |
2026-04-08 00:40:11.923522 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-08 00:40:11.923533 | orchestrator | Wednesday 08 April 2026 00:40:09 +0000 (0:00:00.126) 0:00:39.103 *******
2026-04-08 00:40:11.923543 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:11.923554 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:11.923565 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:11.923576 | orchestrator |
2026-04-08 00:40:11.923586 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-08 00:40:11.923597 | orchestrator | Wednesday 08 April 2026 00:40:09 +0000 (0:00:00.131) 0:00:39.235 *******
2026-04-08 00:40:11.923608 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:11.923619 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:11.923630 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:11.923641 | orchestrator |
2026-04-08 00:40:11.923673 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-08 00:40:11.923693 | orchestrator | Wednesday 08 April 2026 00:40:10 +0000 (0:00:00.241) 0:00:39.477 *******
2026-04-08 00:40:11.923712 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:11.923731 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:11.923749 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:11.923764 | orchestrator |
2026-04-08 00:40:11.923775 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-08 00:40:11.923786 | orchestrator | Wednesday 08 April 2026 00:40:10 +0000 (0:00:00.143) 0:00:39.620 *******
2026-04-08 00:40:11.923797 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:11.923826 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:11.923840 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:11.923858 | orchestrator |
2026-04-08 00:40:11.923877 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-08 00:40:11.923895 | orchestrator | Wednesday 08 April 2026 00:40:10 +0000 (0:00:00.125) 0:00:39.745 *******
2026-04-08 00:40:11.923914 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:11.923933 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:11.923948 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:11.923959 | orchestrator |
2026-04-08 00:40:11.923970 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-08 00:40:11.923980 | orchestrator | Wednesday 08 April 2026 00:40:10 +0000 (0:00:00.126) 0:00:39.872 *******
2026-04-08 00:40:11.924040 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:11.924052 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:11.924063 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:11.924074 | orchestrator |
2026-04-08 00:40:11.924084 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-08 00:40:11.924095 | orchestrator | Wednesday 08 April 2026 00:40:10 +0000 (0:00:00.130) 0:00:40.002 *******
2026-04-08 00:40:11.924106 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:40:11.924117 | orchestrator |
2026-04-08 00:40:11.924128 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-08 00:40:11.924138 | orchestrator | Wednesday 08 April 2026 00:40:10 +0000 (0:00:00.443) 0:00:40.446 *******
2026-04-08 00:40:11.924149 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:40:11.924160 | orchestrator |
2026-04-08 00:40:11.924170 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-08 00:40:11.924181 | orchestrator | Wednesday 08 April 2026 00:40:11 +0000 (0:00:00.461) 0:00:40.907 *******
2026-04-08 00:40:11.924192 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:40:11.924203 | orchestrator |
2026-04-08 00:40:11.924214 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-08 00:40:11.924224 | orchestrator | Wednesday 08 April 2026 00:40:11 +0000 (0:00:00.142) 0:00:41.049 *******
2026-04-08 00:40:11.924235 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'vg_name': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:11.924247 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'vg_name': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:11.924258 | orchestrator |
2026-04-08 00:40:11.924269 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-08 00:40:11.924279 | orchestrator | Wednesday 08 April 2026 00:40:11 +0000 (0:00:00.145) 0:00:41.195 *******
2026-04-08 00:40:11.924290 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:11.924301 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:11.924312 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:11.924323 | orchestrator |
2026-04-08 00:40:11.924343 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-08 00:40:11.924361 | orchestrator | Wednesday 08 April 2026 00:40:11 +0000 (0:00:00.133) 0:00:41.328 *******
2026-04-08 00:40:11.924379 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:11.924411 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:17.020109 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:17.020198 | orchestrator |
2026-04-08 00:40:17.020208 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-08 00:40:17.020217 | orchestrator | Wednesday 08 April 2026 00:40:11 +0000 (0:00:00.130) 0:00:41.459 *******
2026-04-08 00:40:17.020224 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'})
2026-04-08 00:40:17.020234 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'})
2026-04-08 00:40:17.020241 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:40:17.020247 | orchestrator |
2026-04-08 00:40:17.020254 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-08 00:40:17.020261 | orchestrator | Wednesday 08 April 2026 00:40:12 +0000 (0:00:00.129) 0:00:41.589 *******
2026-04-08 00:40:17.020268 | orchestrator | ok: [testbed-node-4] => {
2026-04-08 00:40:17.020275 | orchestrator |     "lvm_report": {
2026-04-08 00:40:17.020283 | orchestrator |         "lv": [
2026-04-08 00:40:17.020290 | orchestrator |             {
2026-04-08 00:40:17.020311 | orchestrator |                 "lv_name": "osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e",
2026-04-08 00:40:17.020319 | orchestrator |                 "vg_name": "ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e"
2026-04-08 00:40:17.020325 | orchestrator |             },
2026-04-08 00:40:17.020332 | orchestrator |             {
2026-04-08 00:40:17.020339 | orchestrator |                 "lv_name": "osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13",
2026-04-08 00:40:17.020346 | orchestrator |                 "vg_name": "ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13"
2026-04-08 00:40:17.020352 | orchestrator |             }
2026-04-08 00:40:17.020359 | orchestrator |         ],
2026-04-08 00:40:17.020366 | orchestrator |         "pv": [
2026-04-08 00:40:17.020373 | orchestrator |             {
2026-04-08 00:40:17.020379 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-08 00:40:17.020386 | orchestrator |                 "vg_name": "ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13"
2026-04-08 00:40:17.020393 | orchestrator |             },
2026-04-08 00:40:17.020399 | orchestrator |             {
2026-04-08 00:40:17.020406 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-08 00:40:17.020413 | orchestrator |                 "vg_name": "ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e"
2026-04-08 00:40:17.020419 | orchestrator |             }
2026-04-08 00:40:17.020426 | orchestrator |         ]
2026-04-08 00:40:17.020433 | orchestrator |     }
2026-04-08 00:40:17.020440 | orchestrator | }
2026-04-08 00:40:17.020447 | orchestrator |
2026-04-08 00:40:17.020454 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-08 00:40:17.020461 | orchestrator |
2026-04-08 00:40:17.020467 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-08 00:40:17.020474 | orchestrator | Wednesday 08 April 2026 00:40:12 +0000 (0:00:00.385) 0:00:41.975 *******
2026-04-08 00:40:17.020481 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-08 00:40:17.020488 | orchestrator |
2026-04-08 00:40:17.020494 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-08 00:40:17.020501 | orchestrator | Wednesday 08 April 2026 00:40:12 +0000 (0:00:00.214) 0:00:42.189 *******
2026-04-08 00:40:17.020508 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:40:17.020533 | orchestrator |
2026-04-08 00:40:17.020542 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:40:17.020554 | orchestrator | Wednesday 08 April 2026 00:40:12 +0000 (0:00:00.200) 0:00:42.389 *******
2026-04-08 00:40:17.020566 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-08 00:40:17.020578 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-08 00:40:17.020589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-08 00:40:17.020600 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-08 00:40:17.020616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-08 00:40:17.020628 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-08 00:40:17.020641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-08 00:40:17.020654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-08 00:40:17.020666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-08 00:40:17.020679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-08 00:40:17.020687 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-08 00:40:17.020696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-08 00:40:17.020704 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-08 00:40:17.020711 | orchestrator |
2026-04-08 00:40:17.020719 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:40:17.020727 | orchestrator | Wednesday 08 April 2026 00:40:13 +0000 (0:00:00.340) 0:00:42.730 *******
2026-04-08 00:40:17.020735 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:17.020744 | orchestrator |
2026-04-08 00:40:17.020752 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-08 00:40:17.020759 | orchestrator | Wednesday 08 April 2026 00:40:13 +0000 (0:00:00.170) 0:00:42.901
******* 2026-04-08 00:40:17.020767 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:17.020775 | orchestrator | 2026-04-08 00:40:17.020782 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:17.020806 | orchestrator | Wednesday 08 April 2026 00:40:13 +0000 (0:00:00.161) 0:00:43.062 ******* 2026-04-08 00:40:17.020814 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:17.020822 | orchestrator | 2026-04-08 00:40:17.020830 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:17.020838 | orchestrator | Wednesday 08 April 2026 00:40:13 +0000 (0:00:00.175) 0:00:43.238 ******* 2026-04-08 00:40:17.020846 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:17.020854 | orchestrator | 2026-04-08 00:40:17.020862 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:17.020870 | orchestrator | Wednesday 08 April 2026 00:40:13 +0000 (0:00:00.163) 0:00:43.401 ******* 2026-04-08 00:40:17.020878 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:17.020885 | orchestrator | 2026-04-08 00:40:17.020893 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:17.020901 | orchestrator | Wednesday 08 April 2026 00:40:14 +0000 (0:00:00.170) 0:00:43.571 ******* 2026-04-08 00:40:17.020910 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:17.020918 | orchestrator | 2026-04-08 00:40:17.020925 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:17.020934 | orchestrator | Wednesday 08 April 2026 00:40:14 +0000 (0:00:00.507) 0:00:44.079 ******* 2026-04-08 00:40:17.020943 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:17.020951 | orchestrator | 2026-04-08 00:40:17.020967 | orchestrator | TASK [Add known links to the list of available 
block devices] ****************** 2026-04-08 00:40:17.020974 | orchestrator | Wednesday 08 April 2026 00:40:14 +0000 (0:00:00.174) 0:00:44.253 ******* 2026-04-08 00:40:17.021028 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:17.021035 | orchestrator | 2026-04-08 00:40:17.021042 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:17.021049 | orchestrator | Wednesday 08 April 2026 00:40:14 +0000 (0:00:00.169) 0:00:44.423 ******* 2026-04-08 00:40:17.021055 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343) 2026-04-08 00:40:17.021063 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343) 2026-04-08 00:40:17.021070 | orchestrator | 2026-04-08 00:40:17.021077 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:17.021084 | orchestrator | Wednesday 08 April 2026 00:40:15 +0000 (0:00:00.356) 0:00:44.779 ******* 2026-04-08 00:40:17.021090 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8b42dfd4-7d2e-4d67-9c6f-8993b8aa5f54) 2026-04-08 00:40:17.021097 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8b42dfd4-7d2e-4d67-9c6f-8993b8aa5f54) 2026-04-08 00:40:17.021104 | orchestrator | 2026-04-08 00:40:17.021110 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:17.021117 | orchestrator | Wednesday 08 April 2026 00:40:15 +0000 (0:00:00.380) 0:00:45.160 ******* 2026-04-08 00:40:17.021123 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_29911cfa-2062-4b30-9263-aae8438640a0) 2026-04-08 00:40:17.021130 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_29911cfa-2062-4b30-9263-aae8438640a0) 2026-04-08 00:40:17.021137 | orchestrator | 2026-04-08 00:40:17.021143 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2026-04-08 00:40:17.021150 | orchestrator | Wednesday 08 April 2026 00:40:16 +0000 (0:00:00.397) 0:00:45.557 ******* 2026-04-08 00:40:17.021156 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7b1a6d0f-b1ea-4446-8ff8-479db77ebf36) 2026-04-08 00:40:17.021163 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7b1a6d0f-b1ea-4446-8ff8-479db77ebf36) 2026-04-08 00:40:17.021170 | orchestrator | 2026-04-08 00:40:17.021176 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-08 00:40:17.021183 | orchestrator | Wednesday 08 April 2026 00:40:16 +0000 (0:00:00.382) 0:00:45.940 ******* 2026-04-08 00:40:17.021190 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-08 00:40:17.021197 | orchestrator | 2026-04-08 00:40:17.021203 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:17.021210 | orchestrator | Wednesday 08 April 2026 00:40:16 +0000 (0:00:00.276) 0:00:46.216 ******* 2026-04-08 00:40:17.021216 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-08 00:40:17.021223 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-08 00:40:17.021230 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-08 00:40:17.021236 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-08 00:40:17.021243 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-08 00:40:17.021249 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-08 00:40:17.021256 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-08 00:40:17.021262 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-08 00:40:17.021269 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-08 00:40:17.021319 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-08 00:40:17.021327 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-08 00:40:17.021339 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-08 00:40:24.878409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-08 00:40:24.878485 | orchestrator | 2026-04-08 00:40:24.878493 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:24.878498 | orchestrator | Wednesday 08 April 2026 00:40:17 +0000 (0:00:00.342) 0:00:46.559 ******* 2026-04-08 00:40:24.878504 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:24.878509 | orchestrator | 2026-04-08 00:40:24.878515 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:24.878520 | orchestrator | Wednesday 08 April 2026 00:40:17 +0000 (0:00:00.168) 0:00:46.727 ******* 2026-04-08 00:40:24.878525 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:24.878530 | orchestrator | 2026-04-08 00:40:24.878535 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:24.878540 | orchestrator | Wednesday 08 April 2026 00:40:17 +0000 (0:00:00.182) 0:00:46.910 ******* 2026-04-08 00:40:24.878544 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:24.878549 | orchestrator | 2026-04-08 00:40:24.878554 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:24.878570 | orchestrator | Wednesday 08 April 2026 00:40:17 +0000 (0:00:00.451) 0:00:47.361 ******* 2026-04-08 00:40:24.878576 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:24.878583 | orchestrator | 2026-04-08 00:40:24.878591 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:24.878598 | orchestrator | Wednesday 08 April 2026 00:40:18 +0000 (0:00:00.173) 0:00:47.535 ******* 2026-04-08 00:40:24.878605 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:24.878613 | orchestrator | 2026-04-08 00:40:24.878620 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:24.878629 | orchestrator | Wednesday 08 April 2026 00:40:18 +0000 (0:00:00.166) 0:00:47.701 ******* 2026-04-08 00:40:24.878636 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:24.878644 | orchestrator | 2026-04-08 00:40:24.878651 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:24.878656 | orchestrator | Wednesday 08 April 2026 00:40:18 +0000 (0:00:00.171) 0:00:47.873 ******* 2026-04-08 00:40:24.878661 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:24.878666 | orchestrator | 2026-04-08 00:40:24.878670 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:24.878675 | orchestrator | Wednesday 08 April 2026 00:40:18 +0000 (0:00:00.158) 0:00:48.032 ******* 2026-04-08 00:40:24.878680 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:24.878685 | orchestrator | 2026-04-08 00:40:24.878690 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-08 00:40:24.878695 | orchestrator | Wednesday 08 April 2026 00:40:18 +0000 (0:00:00.169) 0:00:48.202 ******* 
2026-04-08 00:40:24.878700 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-04-08 00:40:24.878706 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-04-08 00:40:24.878711 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-04-08 00:40:24.878716 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-04-08 00:40:24.878720 | orchestrator |
2026-04-08 00:40:24.878725 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:24.878730 | orchestrator | Wednesday 08 April 2026 00:40:19 +0000 (0:00:00.605) 0:00:48.807 *******
2026-04-08 00:40:24.878735 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:24.878739 | orchestrator |
2026-04-08 00:40:24.878744 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:24.878749 | orchestrator | Wednesday 08 April 2026 00:40:19 +0000 (0:00:00.174) 0:00:48.982 *******
2026-04-08 00:40:24.878769 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:24.878774 | orchestrator |
2026-04-08 00:40:24.878779 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:24.878784 | orchestrator | Wednesday 08 April 2026 00:40:19 +0000 (0:00:00.171) 0:00:49.154 *******
2026-04-08 00:40:24.878788 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:24.878793 | orchestrator |
2026-04-08 00:40:24.878798 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-08 00:40:24.878803 | orchestrator | Wednesday 08 April 2026 00:40:19 +0000 (0:00:00.173) 0:00:49.327 *******
2026-04-08 00:40:24.878807 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:24.878812 | orchestrator |
2026-04-08 00:40:24.878817 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-08 00:40:24.878822 | orchestrator | Wednesday 08 April 2026 00:40:20 +0000 (0:00:00.167) 0:00:49.494 *******
2026-04-08 00:40:24.878826 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:24.878831 | orchestrator |
2026-04-08 00:40:24.878836 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-08 00:40:24.878840 | orchestrator | Wednesday 08 April 2026 00:40:20 +0000 (0:00:00.110) 0:00:49.605 *******
2026-04-08 00:40:24.878845 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c80af5d6-1159-5955-8f01-035b314db1bd'}})
2026-04-08 00:40:24.878851 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd7d0ff5a-46f9-53d2-8425-61ef59e49033'}})
2026-04-08 00:40:24.878856 | orchestrator |
2026-04-08 00:40:24.878860 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-08 00:40:24.878865 | orchestrator | Wednesday 08 April 2026 00:40:20 +0000 (0:00:00.289) 0:00:49.894 *******
2026-04-08 00:40:24.878872 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})
2026-04-08 00:40:24.878878 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})
2026-04-08 00:40:24.878882 | orchestrator |
2026-04-08 00:40:24.878887 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-08 00:40:24.878903 | orchestrator | Wednesday 08 April 2026 00:40:22 +0000 (0:00:01.767) 0:00:51.662 *******
2026-04-08 00:40:24.878908 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})
2026-04-08 00:40:24.878915 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})
2026-04-08 00:40:24.878920 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:24.878925 | orchestrator |
2026-04-08 00:40:24.878930 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-08 00:40:24.878934 | orchestrator | Wednesday 08 April 2026 00:40:22 +0000 (0:00:00.144) 0:00:51.807 *******
2026-04-08 00:40:24.878939 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})
2026-04-08 00:40:24.878947 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})
2026-04-08 00:40:24.878952 | orchestrator |
2026-04-08 00:40:24.878956 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-08 00:40:24.878961 | orchestrator | Wednesday 08 April 2026 00:40:23 +0000 (0:00:01.340) 0:00:53.147 *******
2026-04-08 00:40:24.878966 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})
2026-04-08 00:40:24.879007 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})
2026-04-08 00:40:24.879018 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:24.879024 | orchestrator |
2026-04-08 00:40:24.879029 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-08 00:40:24.879035 | orchestrator | Wednesday 08 April 2026 00:40:23 +0000 (0:00:00.185) 0:00:53.333 *******
2026-04-08 00:40:24.879041 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:24.879046 | orchestrator |
2026-04-08 00:40:24.879051 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-08 00:40:24.879057 | orchestrator | Wednesday 08 April 2026 00:40:23 +0000 (0:00:00.126) 0:00:53.459 *******
2026-04-08 00:40:24.879063 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})
2026-04-08 00:40:24.879068 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})
2026-04-08 00:40:24.879073 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:24.879079 | orchestrator |
2026-04-08 00:40:24.879085 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-08 00:40:24.879091 | orchestrator | Wednesday 08 April 2026 00:40:24 +0000 (0:00:00.144) 0:00:53.604 *******
2026-04-08 00:40:24.879096 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:24.879102 | orchestrator |
2026-04-08 00:40:24.879107 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-08 00:40:24.879113 | orchestrator | Wednesday 08 April 2026 00:40:24 +0000 (0:00:00.133) 0:00:53.737 *******
2026-04-08 00:40:24.879118 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})
2026-04-08 00:40:24.879124 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})
2026-04-08 00:40:24.879129 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:24.879135 | orchestrator |
2026-04-08 00:40:24.879140 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-08 00:40:24.879146 | orchestrator | Wednesday 08 April 2026 00:40:24 +0000 (0:00:00.139) 0:00:53.876 *******
2026-04-08 00:40:24.879152 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:24.879157 | orchestrator |
2026-04-08 00:40:24.879163 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-08 00:40:24.879168 | orchestrator | Wednesday 08 April 2026 00:40:24 +0000 (0:00:00.130) 0:00:54.007 *******
2026-04-08 00:40:24.879174 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})
2026-04-08 00:40:24.879179 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})
2026-04-08 00:40:24.879185 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:24.879190 | orchestrator |
2026-04-08 00:40:24.879196 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-08 00:40:24.879202 | orchestrator | Wednesday 08 April 2026 00:40:24 +0000 (0:00:00.144) 0:00:54.151 *******
2026-04-08 00:40:24.879207 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:40:24.879213 | orchestrator |
2026-04-08 00:40:24.879219 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-08 00:40:24.879224 | orchestrator | Wednesday 08 April 2026 00:40:24 +0000 (0:00:00.127) 0:00:54.279 *******
2026-04-08 00:40:24.879234 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})
2026-04-08 00:40:30.473199 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})
2026-04-08 00:40:30.473339 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.473360 | orchestrator |
2026-04-08 00:40:30.473374 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-08 00:40:30.473387 | orchestrator | Wednesday 08 April 2026 00:40:25 +0000 (0:00:00.342) 0:00:54.621 *******
2026-04-08 00:40:30.473399 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})
2026-04-08 00:40:30.473411 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})
2026-04-08 00:40:30.473422 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.473433 | orchestrator |
2026-04-08 00:40:30.473444 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-08 00:40:30.473469 | orchestrator | Wednesday 08 April 2026 00:40:25 +0000 (0:00:00.151) 0:00:54.773 *******
2026-04-08 00:40:30.473481 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})
2026-04-08 00:40:30.473492 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})
2026-04-08 00:40:30.473503 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.473514 | orchestrator |
2026-04-08 00:40:30.473525 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-08 00:40:30.473536 | orchestrator | Wednesday 08 April 2026 00:40:25 +0000 (0:00:00.145) 0:00:54.918 *******
2026-04-08 00:40:30.473547 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.473558 | orchestrator |
2026-04-08 00:40:30.473568 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-08 00:40:30.473580 | orchestrator | Wednesday 08 April 2026 00:40:25 +0000 (0:00:00.128) 0:00:55.046 *******
2026-04-08 00:40:30.473591 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.473602 | orchestrator |
2026-04-08 00:40:30.473613 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-08 00:40:30.473624 | orchestrator | Wednesday 08 April 2026 00:40:25 +0000 (0:00:00.138) 0:00:55.185 *******
2026-04-08 00:40:30.473635 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.473646 | orchestrator |
2026-04-08 00:40:30.473658 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-08 00:40:30.473669 | orchestrator | Wednesday 08 April 2026 00:40:25 +0000 (0:00:00.130) 0:00:55.316 *******
2026-04-08 00:40:30.473679 | orchestrator | ok: [testbed-node-5] => {
2026-04-08 00:40:30.473689 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-08 00:40:30.473699 | orchestrator | }
2026-04-08 00:40:30.473709 | orchestrator |
2026-04-08 00:40:30.473719 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-08 00:40:30.473730 | orchestrator | Wednesday 08 April 2026 00:40:25 +0000 (0:00:00.136) 0:00:55.452 *******
2026-04-08 00:40:30.473742 | orchestrator | ok: [testbed-node-5] => {
2026-04-08 00:40:30.473754 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-08 00:40:30.473765 | orchestrator | }
2026-04-08 00:40:30.473776 | orchestrator |
2026-04-08 00:40:30.473787 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-08 00:40:30.473797 | orchestrator | Wednesday 08 April 2026 00:40:26 +0000 (0:00:00.133) 0:00:55.586 *******
2026-04-08 00:40:30.473808 | orchestrator | ok: [testbed-node-5] => {
2026-04-08 00:40:30.473819 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-08 00:40:30.473829 | orchestrator | }
2026-04-08 00:40:30.473839 | orchestrator |
2026-04-08 00:40:30.473851 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-08 00:40:30.473862 | orchestrator | Wednesday 08 April 2026 00:40:26 +0000 (0:00:00.134) 0:00:55.720 *******
2026-04-08 00:40:30.473884 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:40:30.473895 | orchestrator |
2026-04-08 00:40:30.473906 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-08 00:40:30.473916 | orchestrator | Wednesday 08 April 2026 00:40:26 +0000 (0:00:00.540) 0:00:56.261 *******
2026-04-08 00:40:30.473927 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:40:30.473938 | orchestrator |
2026-04-08 00:40:30.473949 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-08 00:40:30.473960 | orchestrator | Wednesday 08 April 2026 00:40:27 +0000 (0:00:00.467) 0:00:56.728 *******
2026-04-08 00:40:30.474004 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:40:30.474057 | orchestrator |
2026-04-08 00:40:30.474071 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-08 00:40:30.474083 | orchestrator | Wednesday 08 April 2026 00:40:27 +0000 (0:00:00.521) 0:00:57.249 *******
2026-04-08 00:40:30.474094 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:40:30.474105 | orchestrator |
2026-04-08 00:40:30.474116 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-08 00:40:30.474127 | orchestrator | Wednesday 08 April 2026 00:40:28 +0000 (0:00:00.269) 0:00:57.519 *******
2026-04-08 00:40:30.474138 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.474148 | orchestrator |
2026-04-08 00:40:30.474160 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-08 00:40:30.474171 | orchestrator | Wednesday 08 April 2026 00:40:28 +0000 (0:00:00.095) 0:00:57.615 *******
2026-04-08 00:40:30.474182 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.474194 | orchestrator |
2026-04-08 00:40:30.474205 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-08 00:40:30.474216 | orchestrator | Wednesday 08 April 2026 00:40:28 +0000 (0:00:00.104) 0:00:57.719 *******
2026-04-08 00:40:30.474226 | orchestrator | ok: [testbed-node-5] => {
2026-04-08 00:40:30.474237 | orchestrator |     "vgs_report": {
2026-04-08 00:40:30.474249 | orchestrator |         "vg": []
2026-04-08 00:40:30.474283 | orchestrator |     }
2026-04-08 00:40:30.474295 | orchestrator | }
2026-04-08 00:40:30.474307 | orchestrator |
2026-04-08 00:40:30.474319 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-08 00:40:30.474330 | orchestrator | Wednesday 08 April 2026 00:40:28 +0000 (0:00:00.115) 0:00:57.834 *******
2026-04-08 00:40:30.474341 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.474352 | orchestrator |
2026-04-08 00:40:30.474363 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-08 00:40:30.474374 | orchestrator | Wednesday 08 April 2026 00:40:28 +0000 (0:00:00.118) 0:00:57.953 *******
2026-04-08 00:40:30.474385 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.474396 | orchestrator |
2026-04-08 00:40:30.474407 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-08 00:40:30.474418 | orchestrator | Wednesday 08 April 2026 00:40:28 +0000 (0:00:00.129) 0:00:58.083 *******
2026-04-08 00:40:30.474429 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.474440 | orchestrator |
2026-04-08 00:40:30.474452 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-08 00:40:30.474462 | orchestrator | Wednesday 08 April 2026 00:40:28 +0000 (0:00:00.120) 0:00:58.203 *******
2026-04-08 00:40:30.474473 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.474484 | orchestrator |
2026-04-08 00:40:30.474495 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-08 00:40:30.474506 | orchestrator | Wednesday 08 April 2026 00:40:28 +0000 (0:00:00.119) 0:00:58.323 *******
2026-04-08 00:40:30.474517 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.474528 | orchestrator |
2026-04-08 00:40:30.474537 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-08 00:40:30.474548 | orchestrator | Wednesday 08 April 2026 00:40:28 +0000 (0:00:00.118) 0:00:58.441 *******
2026-04-08 00:40:30.474558 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.474569 | orchestrator |
2026-04-08 00:40:30.474581 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-08 00:40:30.474604 | orchestrator | Wednesday 08 April 2026 00:40:29 +0000 (0:00:00.115) 0:00:58.556 *******
2026-04-08 00:40:30.474615 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.474625 | orchestrator |
2026-04-08 00:40:30.474636 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-08 00:40:30.474648 | orchestrator | Wednesday 08 April 2026 00:40:29 +0000 (0:00:00.119) 0:00:58.676 *******
2026-04-08 00:40:30.474658 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.474669 | orchestrator |
2026-04-08 00:40:30.474680 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-08 00:40:30.474689 | orchestrator | Wednesday 08 April 2026 00:40:29 +0000 (0:00:00.116) 0:00:58.792 *******
2026-04-08 00:40:30.474699 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.474709 | orchestrator |
2026-04-08 00:40:30.474721 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-08 00:40:30.474732 | orchestrator | Wednesday 08 April 2026 00:40:29 +0000 (0:00:00.250) 0:00:59.043 *******
2026-04-08 00:40:30.474742 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.474753 | orchestrator |
2026-04-08 00:40:30.474765 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-08 00:40:30.474776 | orchestrator | Wednesday 08 April 2026 00:40:29 +0000 (0:00:00.122) 0:00:59.166 *******
2026-04-08 00:40:30.474787 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.474797 | orchestrator |
2026-04-08 00:40:30.474808 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-08 00:40:30.474820 | orchestrator | Wednesday 08 April 2026 00:40:29 +0000 (0:00:00.114) 0:00:59.280 *******
2026-04-08 00:40:30.474830 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.474840 | orchestrator |
2026-04-08 00:40:30.474851 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-08 00:40:30.474862 | orchestrator | Wednesday 08 April 2026 00:40:29 +0000 (0:00:00.123) 0:00:59.403 *******
2026-04-08 00:40:30.474873 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.474883 | orchestrator |
2026-04-08 00:40:30.474893 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-08 00:40:30.474905 | orchestrator | Wednesday 08 April 2026 00:40:30 +0000 (0:00:00.111) 0:00:59.515 *******
2026-04-08 00:40:30.474915 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.474926 | orchestrator |
2026-04-08 00:40:30.474937 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-08 00:40:30.474947 | orchestrator | Wednesday 08 April 2026 00:40:30 +0000 (0:00:00.108) 0:00:59.623 *******
2026-04-08 00:40:30.474958 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})
2026-04-08 00:40:30.474995 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})
2026-04-08 00:40:30.475004 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.475015 | orchestrator |
2026-04-08 00:40:30.475025 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-08 00:40:30.475036 | orchestrator | Wednesday 08 April 2026 00:40:30 +0000 (0:00:00.131) 0:00:59.755 *******
2026-04-08 00:40:30.475047 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})
2026-04-08 00:40:30.475058 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})
2026-04-08 00:40:30.475081 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:30.475093 | orchestrator |
2026-04-08 00:40:30.475104 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-08 00:40:30.475116 | orchestrator | Wednesday 08 April 2026 00:40:30 +0000 (0:00:00.130) 0:00:59.885 *******
2026-04-08 00:40:30.475149 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})
2026-04-08 00:40:33.050416 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})
2026-04-08 00:40:33.050541 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:33.050567 | orchestrator |
2026-04-08 00:40:33.050585 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-08 00:40:33.050597 | orchestrator | Wednesday 08 April 2026 00:40:30 +0000 (0:00:00.128) 0:01:00.014 *******
2026-04-08 00:40:33.050608 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})
2026-04-08 00:40:33.050633 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})
2026-04-08 00:40:33.050643 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:33.050653 | orchestrator |
2026-04-08 00:40:33.050662 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-08 00:40:33.050672 | orchestrator | Wednesday 08 April 2026 00:40:30 +0000 (0:00:00.130) 0:01:00.144 *******
2026-04-08 00:40:33.050682 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})
2026-04-08 00:40:33.050692 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})
2026-04-08 00:40:33.050702 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:40:33.050711 | orchestrator |
2026-04-08 00:40:33.050722 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-08 00:40:33.050731 | orchestrator | Wednesday 08 April 2026 00:40:30 +0000 (0:00:00.135) 0:01:00.280 *******
2026-04-08 00:40:33.050741 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg':
'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})  2026-04-08 00:40:33.050751 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})  2026-04-08 00:40:33.050761 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:33.050770 | orchestrator | 2026-04-08 00:40:33.050780 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-08 00:40:33.050790 | orchestrator | Wednesday 08 April 2026 00:40:30 +0000 (0:00:00.130) 0:01:00.411 ******* 2026-04-08 00:40:33.050800 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})  2026-04-08 00:40:33.050809 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})  2026-04-08 00:40:33.050819 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:33.050828 | orchestrator | 2026-04-08 00:40:33.050838 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-08 00:40:33.050848 | orchestrator | Wednesday 08 April 2026 00:40:31 +0000 (0:00:00.254) 0:01:00.665 ******* 2026-04-08 00:40:33.050866 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})  2026-04-08 00:40:33.050882 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})  2026-04-08 00:40:33.050899 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:33.050915 | orchestrator | 2026-04-08 00:40:33.050931 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-08 
00:40:33.051003 | orchestrator | Wednesday 08 April 2026 00:40:31 +0000 (0:00:00.130) 0:01:00.795 ******* 2026-04-08 00:40:33.051025 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:40:33.051044 | orchestrator | 2026-04-08 00:40:33.051062 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-08 00:40:33.051081 | orchestrator | Wednesday 08 April 2026 00:40:31 +0000 (0:00:00.434) 0:01:01.230 ******* 2026-04-08 00:40:33.051099 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:40:33.051118 | orchestrator | 2026-04-08 00:40:33.051136 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-08 00:40:33.051154 | orchestrator | Wednesday 08 April 2026 00:40:32 +0000 (0:00:00.449) 0:01:01.680 ******* 2026-04-08 00:40:33.051172 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:40:33.051189 | orchestrator | 2026-04-08 00:40:33.051206 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-08 00:40:33.051225 | orchestrator | Wednesday 08 April 2026 00:40:32 +0000 (0:00:00.129) 0:01:01.809 ******* 2026-04-08 00:40:33.051244 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'vg_name': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'}) 2026-04-08 00:40:33.051262 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'vg_name': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'}) 2026-04-08 00:40:33.051279 | orchestrator | 2026-04-08 00:40:33.051341 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-08 00:40:33.051361 | orchestrator | Wednesday 08 April 2026 00:40:32 +0000 (0:00:00.142) 0:01:01.952 ******* 2026-04-08 00:40:33.051433 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 
'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})  2026-04-08 00:40:33.051452 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})  2026-04-08 00:40:33.051469 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:33.051487 | orchestrator | 2026-04-08 00:40:33.051504 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-08 00:40:33.051520 | orchestrator | Wednesday 08 April 2026 00:40:32 +0000 (0:00:00.127) 0:01:02.080 ******* 2026-04-08 00:40:33.051536 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})  2026-04-08 00:40:33.051561 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})  2026-04-08 00:40:33.051580 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:33.051597 | orchestrator | 2026-04-08 00:40:33.051613 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-08 00:40:33.051630 | orchestrator | Wednesday 08 April 2026 00:40:32 +0000 (0:00:00.154) 0:01:02.234 ******* 2026-04-08 00:40:33.051646 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'})  2026-04-08 00:40:33.051662 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'})  2026-04-08 00:40:33.051678 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:33.051693 | orchestrator | 2026-04-08 00:40:33.051703 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-08 
00:40:33.051712 | orchestrator | Wednesday 08 April 2026 00:40:32 +0000 (0:00:00.139) 0:01:02.373 ******* 2026-04-08 00:40:33.051722 | orchestrator | ok: [testbed-node-5] => { 2026-04-08 00:40:33.051731 | orchestrator |  "lvm_report": { 2026-04-08 00:40:33.051742 | orchestrator |  "lv": [ 2026-04-08 00:40:33.051752 | orchestrator |  { 2026-04-08 00:40:33.051762 | orchestrator |  "lv_name": "osd-block-c80af5d6-1159-5955-8f01-035b314db1bd", 2026-04-08 00:40:33.051783 | orchestrator |  "vg_name": "ceph-c80af5d6-1159-5955-8f01-035b314db1bd" 2026-04-08 00:40:33.051793 | orchestrator |  }, 2026-04-08 00:40:33.051802 | orchestrator |  { 2026-04-08 00:40:33.051812 | orchestrator |  "lv_name": "osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033", 2026-04-08 00:40:33.051822 | orchestrator |  "vg_name": "ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033" 2026-04-08 00:40:33.051831 | orchestrator |  } 2026-04-08 00:40:33.051841 | orchestrator |  ], 2026-04-08 00:40:33.051851 | orchestrator |  "pv": [ 2026-04-08 00:40:33.051860 | orchestrator |  { 2026-04-08 00:40:33.051870 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-08 00:40:33.051879 | orchestrator |  "vg_name": "ceph-c80af5d6-1159-5955-8f01-035b314db1bd" 2026-04-08 00:40:33.051889 | orchestrator |  }, 2026-04-08 00:40:33.051898 | orchestrator |  { 2026-04-08 00:40:33.051908 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-08 00:40:33.051917 | orchestrator |  "vg_name": "ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033" 2026-04-08 00:40:33.051927 | orchestrator |  } 2026-04-08 00:40:33.051936 | orchestrator |  ] 2026-04-08 00:40:33.051946 | orchestrator |  } 2026-04-08 00:40:33.051956 | orchestrator | } 2026-04-08 00:40:33.052001 | orchestrator | 2026-04-08 00:40:33.052011 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:40:33.052021 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-08 00:40:33.052031 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-08 00:40:33.052041 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-08 00:40:33.052050 | orchestrator | 2026-04-08 00:40:33.052060 | orchestrator | 2026-04-08 00:40:33.052070 | orchestrator | 2026-04-08 00:40:33.052079 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:40:33.052089 | orchestrator | Wednesday 08 April 2026 00:40:33 +0000 (0:00:00.128) 0:01:02.502 ******* 2026-04-08 00:40:33.052098 | orchestrator | =============================================================================== 2026-04-08 00:40:33.052108 | orchestrator | Create block VGs -------------------------------------------------------- 5.39s 2026-04-08 00:40:33.052117 | orchestrator | Create block LVs -------------------------------------------------------- 3.97s 2026-04-08 00:40:33.052127 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.73s 2026-04-08 00:40:33.052136 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.49s 2026-04-08 00:40:33.052146 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.40s 2026-04-08 00:40:33.052155 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.40s 2026-04-08 00:40:33.052165 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.35s 2026-04-08 00:40:33.052175 | orchestrator | Add known partitions to the list of available block devices ------------- 1.29s 2026-04-08 00:40:33.052193 | orchestrator | Add known links to the list of available block devices ------------------ 1.03s 2026-04-08 00:40:33.299582 | orchestrator | Add known partitions to the list of available block devices ------------- 0.95s 2026-04-08 
00:40:33.299666 | orchestrator | Print LVM report data --------------------------------------------------- 0.77s 2026-04-08 00:40:33.299675 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2026-04-08 00:40:33.299682 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.64s 2026-04-08 00:40:33.299688 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.63s 2026-04-08 00:40:33.299713 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.61s 2026-04-08 00:40:33.299719 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s 2026-04-08 00:40:33.299725 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.59s 2026-04-08 00:40:33.299742 | orchestrator | Get initial list of available block devices ----------------------------- 0.58s 2026-04-08 00:40:33.299748 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.57s 2026-04-08 00:40:33.299754 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.56s 2026-04-08 00:40:44.633748 | orchestrator | 2026-04-08 00:40:44 | INFO  | Prepare task for execution of facts. 2026-04-08 00:40:44.698237 | orchestrator | 2026-04-08 00:40:44 | INFO  | Task ba966db6-d7f0-49a7-8a55-de751c3c8b42 (facts) was prepared for execution. 2026-04-08 00:40:44.698341 | orchestrator | 2026-04-08 00:40:44 | INFO  | It takes a moment until task ba966db6-d7f0-49a7-8a55-de751c3c8b42 (facts) has been started and output is visible here. 
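The Ceph play above gathers LV and PV lists, combines them into the `lvm_report` dict that the "Print LVM report data" task dumps, and derives VG/LV names for the `lvm_volumes` checks. A minimal sketch of that combine step, using the exact names printed in the log but hypothetical JSON shapes modeled on `lvs --reportformat json` / `pvs --reportformat json` output (the playbook's actual parsing may differ):

```python
import json

# Hypothetical captured output of `lvs --reportformat json` and
# `pvs --reportformat json`; entries taken from the report in the log.
lvs_output = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-c80af5d6-1159-5955-8f01-035b314db1bd",
     "vg_name": "ceph-c80af5d6-1159-5955-8f01-035b314db1bd"},
    {"lv_name": "osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033",
     "vg_name": "ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033"},
]}]})
pvs_output = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb",
     "vg_name": "ceph-c80af5d6-1159-5955-8f01-035b314db1bd"},
    {"pv_name": "/dev/sdc",
     "vg_name": "ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033"},
]}]})

# Combine both reports into one dict, analogous to the
# "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" task.
lvm_report = {
    "lv": json.loads(lvs_output)["report"][0]["lv"],
    "pv": json.loads(pvs_output)["report"][0]["pv"],
}

# Build the "VG/LV" name list used by the lvm_volumes presence checks.
vg_lv_names = [f"{e['vg_name']}/{e['lv_name']}" for e in lvm_report["lv"]]
```

This mirrors why the subsequent "Fail if … LV defined in lvm_volumes is missing" tasks can be simple membership tests against a precomputed list.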
2026-04-08 00:40:55.615144 | orchestrator | 2026-04-08 00:40:55.615258 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-08 00:40:55.615275 | orchestrator | 2026-04-08 00:40:55.615288 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-08 00:40:55.615299 | orchestrator | Wednesday 08 April 2026 00:40:47 +0000 (0:00:00.294) 0:00:00.294 ******* 2026-04-08 00:40:55.615309 | orchestrator | ok: [testbed-manager] 2026-04-08 00:40:55.615320 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:40:55.615330 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:40:55.615340 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:40:55.615350 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:40:55.615360 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:40:55.615369 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:40:55.615379 | orchestrator | 2026-04-08 00:40:55.615389 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-08 00:40:55.615398 | orchestrator | Wednesday 08 April 2026 00:40:48 +0000 (0:00:01.242) 0:00:01.536 ******* 2026-04-08 00:40:55.615408 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:40:55.615418 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:40:55.615428 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:40:55.615437 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:40:55.615447 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:40:55.615457 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:55.615466 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:55.615476 | orchestrator | 2026-04-08 00:40:55.615485 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-08 00:40:55.615495 | orchestrator | 2026-04-08 00:40:55.615504 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-08 00:40:55.615513 | orchestrator | Wednesday 08 April 2026 00:40:49 +0000 (0:00:01.069) 0:00:02.605 ******* 2026-04-08 00:40:55.615523 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:40:55.615531 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:40:55.615541 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:40:55.615551 | orchestrator | ok: [testbed-manager] 2026-04-08 00:40:55.615560 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:40:55.615570 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:40:55.615580 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:40:55.615590 | orchestrator | 2026-04-08 00:40:55.615599 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-08 00:40:55.615609 | orchestrator | 2026-04-08 00:40:55.615618 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-08 00:40:55.615629 | orchestrator | Wednesday 08 April 2026 00:40:54 +0000 (0:00:04.854) 0:00:07.460 ******* 2026-04-08 00:40:55.615639 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:40:55.615649 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:40:55.615659 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:40:55.615692 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:40:55.615704 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:40:55.615714 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:40:55.615724 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:40:55.615733 | orchestrator | 2026-04-08 00:40:55.615743 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:40:55.615753 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:40:55.615764 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-08 00:40:55.615774 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:40:55.615784 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:40:55.615793 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:40:55.615804 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:40:55.615813 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-08 00:40:55.615823 | orchestrator | 2026-04-08 00:40:55.615833 | orchestrator | 2026-04-08 00:40:55.615843 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:40:55.615853 | orchestrator | Wednesday 08 April 2026 00:40:55 +0000 (0:00:00.477) 0:00:07.937 ******* 2026-04-08 00:40:55.615863 | orchestrator | =============================================================================== 2026-04-08 00:40:55.615872 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.85s 2026-04-08 00:40:55.615882 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.24s 2026-04-08 00:40:55.615904 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.07s 2026-04-08 00:40:55.615914 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s 2026-04-08 00:41:07.096765 | orchestrator | 2026-04-08 00:41:07 | INFO  | Prepare task for execution of frr. 2026-04-08 00:41:07.165590 | orchestrator | 2026-04-08 00:41:07 | INFO  | Task 41c0f18d-b0fa-4962-b017-cc545f1afe89 (frr) was prepared for execution. 
2026-04-08 00:41:07.165678 | orchestrator | 2026-04-08 00:41:07 | INFO  | It takes a moment until task 41c0f18d-b0fa-4962-b017-cc545f1afe89 (frr) has been started and output is visible here. 2026-04-08 00:41:30.354882 | orchestrator | 2026-04-08 00:41:30.354971 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-04-08 00:41:30.354979 | orchestrator | 2026-04-08 00:41:30.354983 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-04-08 00:41:30.354987 | orchestrator | Wednesday 08 April 2026 00:41:10 +0000 (0:00:00.269) 0:00:00.269 ******* 2026-04-08 00:41:30.354992 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-04-08 00:41:30.354997 | orchestrator | 2026-04-08 00:41:30.355001 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-04-08 00:41:30.355005 | orchestrator | Wednesday 08 April 2026 00:41:10 +0000 (0:00:00.202) 0:00:00.472 ******* 2026-04-08 00:41:30.355009 | orchestrator | changed: [testbed-manager] 2026-04-08 00:41:30.355014 | orchestrator | 2026-04-08 00:41:30.355018 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-04-08 00:41:30.355022 | orchestrator | Wednesday 08 April 2026 00:41:11 +0000 (0:00:01.360) 0:00:01.832 ******* 2026-04-08 00:41:30.355041 | orchestrator | changed: [testbed-manager] 2026-04-08 00:41:30.355045 | orchestrator | 2026-04-08 00:41:30.355049 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-04-08 00:41:30.355053 | orchestrator | Wednesday 08 April 2026 00:41:20 +0000 (0:00:08.606) 0:00:10.439 ******* 2026-04-08 00:41:30.355057 | orchestrator | ok: [testbed-manager] 2026-04-08 00:41:30.355061 | orchestrator | 2026-04-08 00:41:30.355065 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-04-08 00:41:30.355069 | orchestrator | Wednesday 08 April 2026 00:41:21 +0000 (0:00:00.907) 0:00:11.347 ******* 2026-04-08 00:41:30.355073 | orchestrator | changed: [testbed-manager] 2026-04-08 00:41:30.355077 | orchestrator | 2026-04-08 00:41:30.355081 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-04-08 00:41:30.355084 | orchestrator | Wednesday 08 April 2026 00:41:22 +0000 (0:00:00.832) 0:00:12.179 ******* 2026-04-08 00:41:30.355088 | orchestrator | ok: [testbed-manager] 2026-04-08 00:41:30.355092 | orchestrator | 2026-04-08 00:41:30.355096 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-04-08 00:41:30.355099 | orchestrator | Wednesday 08 April 2026 00:41:23 +0000 (0:00:01.054) 0:00:13.234 ******* 2026-04-08 00:41:30.355103 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:41:30.355107 | orchestrator | 2026-04-08 00:41:30.355111 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-04-08 00:41:30.355115 | orchestrator | Wednesday 08 April 2026 00:41:23 +0000 (0:00:00.146) 0:00:13.380 ******* 2026-04-08 00:41:30.355118 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:41:30.355122 | orchestrator | 2026-04-08 00:41:30.355126 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-04-08 00:41:30.355129 | orchestrator | Wednesday 08 April 2026 00:41:23 +0000 (0:00:00.216) 0:00:13.597 ******* 2026-04-08 00:41:30.355133 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:41:30.355137 | orchestrator | 2026-04-08 00:41:30.355141 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-04-08 00:41:30.355145 | orchestrator | Wednesday 08 April 2026 00:41:23 +0000 (0:00:00.141) 0:00:13.739 ******* 2026-04-08 
00:41:30.355149 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:41:30.355152 | orchestrator | 2026-04-08 00:41:30.355156 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-04-08 00:41:30.355160 | orchestrator | Wednesday 08 April 2026 00:41:23 +0000 (0:00:00.113) 0:00:13.853 ******* 2026-04-08 00:41:30.355164 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:41:30.355167 | orchestrator | 2026-04-08 00:41:30.355171 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-04-08 00:41:30.355175 | orchestrator | Wednesday 08 April 2026 00:41:23 +0000 (0:00:00.130) 0:00:13.983 ******* 2026-04-08 00:41:30.355179 | orchestrator | changed: [testbed-manager] 2026-04-08 00:41:30.355182 | orchestrator | 2026-04-08 00:41:30.355186 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-04-08 00:41:30.355190 | orchestrator | Wednesday 08 April 2026 00:41:25 +0000 (0:00:01.867) 0:00:15.851 ******* 2026-04-08 00:41:30.355194 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-04-08 00:41:30.355197 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-04-08 00:41:30.355202 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-04-08 00:41:30.355206 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-04-08 00:41:30.355210 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-04-08 00:41:30.355214 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-04-08 00:41:30.355218 | orchestrator | 2026-04-08 00:41:30.355221 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-04-08 00:41:30.355229 | orchestrator | Wednesday 08 April 2026 00:41:27 +0000 (0:00:02.024) 0:00:17.875 ******* 2026-04-08 00:41:30.355233 | orchestrator | ok: [testbed-manager] 2026-04-08 00:41:30.355237 | orchestrator | 2026-04-08 00:41:30.355241 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-04-08 00:41:30.355245 | orchestrator | Wednesday 08 April 2026 00:41:28 +0000 (0:00:01.095) 0:00:18.971 ******* 2026-04-08 00:41:30.355249 | orchestrator | changed: [testbed-manager] 2026-04-08 00:41:30.355252 | orchestrator | 2026-04-08 00:41:30.355256 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:41:30.355260 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-08 00:41:30.355264 | orchestrator | 2026-04-08 00:41:30.355268 | orchestrator | 2026-04-08 00:41:30.355283 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:41:30.355287 | orchestrator | Wednesday 08 April 2026 00:41:30 +0000 (0:00:01.316) 0:00:20.288 ******* 2026-04-08 00:41:30.355291 | orchestrator | =============================================================================== 2026-04-08 00:41:30.355295 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.61s 2026-04-08 00:41:30.355298 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.02s 2026-04-08 00:41:30.355302 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.87s 2026-04-08 00:41:30.355306 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.36s 2026-04-08 00:41:30.355321 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.32s 
2026-04-08 00:41:30.355325 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.10s 2026-04-08 00:41:30.355329 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.05s 2026-04-08 00:41:30.355332 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.91s 2026-04-08 00:41:30.355336 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.83s 2026-04-08 00:41:30.355340 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.22s 2026-04-08 00:41:30.355343 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s 2026-04-08 00:41:30.355347 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.15s 2026-04-08 00:41:30.355351 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.14s 2026-04-08 00:41:30.355355 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.13s 2026-04-08 00:41:30.355358 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.11s 2026-04-08 00:41:30.558331 | orchestrator | 2026-04-08 00:41:30.560495 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Apr 8 00:41:30 UTC 2026 2026-04-08 00:41:30.560562 | orchestrator | 2026-04-08 00:41:31.588271 | orchestrator | 2026-04-08 00:41:31 | INFO  | Collection nutshell is prepared for execution 2026-04-08 00:41:31.687243 | orchestrator | 2026-04-08 00:41:31 | INFO  | A [0] - dotfiles 2026-04-08 00:41:41.746458 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [0] - homer 2026-04-08 00:41:41.746572 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [0] - netdata 2026-04-08 00:41:41.746580 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [0] - openstackclient 2026-04-08 00:41:41.746591 | orchestrator | 2026-04-08 00:41:41 
| INFO  | A [0] - phpmyadmin 2026-04-08 00:41:41.746596 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [0] - common 2026-04-08 00:41:41.750560 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [1] -- loadbalancer 2026-04-08 00:41:41.750772 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [2] --- opensearch 2026-04-08 00:41:41.750986 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [2] --- mariadb-ng 2026-04-08 00:41:41.751048 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [3] ---- horizon 2026-04-08 00:41:41.751687 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [3] ---- keystone 2026-04-08 00:41:41.752491 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [4] ----- neutron 2026-04-08 00:41:41.752703 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [5] ------ wait-for-nova 2026-04-08 00:41:41.752968 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [6] ------- octavia 2026-04-08 00:41:41.754725 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [4] ----- barbican 2026-04-08 00:41:41.754811 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [4] ----- designate 2026-04-08 00:41:41.755152 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [4] ----- ironic 2026-04-08 00:41:41.755164 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [4] ----- placement 2026-04-08 00:41:41.755687 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [4] ----- magnum 2026-04-08 00:41:41.757703 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [1] -- openvswitch 2026-04-08 00:41:41.758428 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [2] --- ovn 2026-04-08 00:41:41.759240 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [1] -- memcached 2026-04-08 00:41:41.759279 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [1] -- redis 2026-04-08 00:41:41.759572 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [1] -- rabbitmq-ng 2026-04-08 00:41:41.760051 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [0] - kubernetes 2026-04-08 00:41:41.763843 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [1] -- 
kubeconfig 2026-04-08 00:41:41.763941 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [1] -- copy-kubeconfig 2026-04-08 00:41:41.764279 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [0] - ceph 2026-04-08 00:41:41.767541 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [1] -- ceph-pools 2026-04-08 00:41:41.767600 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [2] --- copy-ceph-keys 2026-04-08 00:41:41.767612 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [3] ---- cephclient 2026-04-08 00:41:41.767823 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-04-08 00:41:41.767838 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [4] ----- wait-for-keystone 2026-04-08 00:41:41.768283 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [5] ------ kolla-ceph-rgw 2026-04-08 00:41:41.768448 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [5] ------ glance 2026-04-08 00:41:41.768581 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [5] ------ cinder 2026-04-08 00:41:41.768672 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [5] ------ nova 2026-04-08 00:41:41.769418 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [4] ----- prometheus 2026-04-08 00:41:41.769570 | orchestrator | 2026-04-08 00:41:41 | INFO  | A [5] ------ grafana 2026-04-08 00:41:41.989687 | orchestrator | 2026-04-08 00:41:41 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-04-08 00:41:41.989757 | orchestrator | 2026-04-08 00:41:41 | INFO  | Tasks are running in the background 2026-04-08 00:41:43.661603 | orchestrator | 2026-04-08 00:41:43 | INFO  | No task IDs specified, wait for all currently running tasks 2026-04-08 00:41:45.861457 | orchestrator | 2026-04-08 00:41:45 | INFO  | Task f1e83987-18b7-47a4-bd1d-cfccd73083ed is in state STARTED 2026-04-08 00:41:45.862745 | orchestrator | 2026-04-08 00:41:45 | INFO  | Task e2967b4e-4adc-4f38-b2de-00fc440895d0 is in state STARTED 2026-04-08 00:41:45.865871 | orchestrator | 2026-04-08 00:41:45 | INFO 
 | Task cbf00e16-1e4f-449a-93f0-ffe41ef4516b is in state STARTED 2026-04-08 00:41:45.866545 | orchestrator | 2026-04-08 00:41:45 | INFO  | Task ab6134c5-4a9e-463c-b8d7-f884f60d587b is in state STARTED 2026-04-08 00:41:45.867666 | orchestrator | 2026-04-08 00:41:45 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:41:45.868120 | orchestrator | 2026-04-08 00:41:45 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:41:45.870866 | orchestrator | 2026-04-08 00:41:45 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED 2026-04-08 00:41:45.870941 | orchestrator | 2026-04-08 00:41:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:42:01.603165 | orchestrator | 2026-04-08 00:42:01 | INFO  | Task f1e83987-18b7-47a4-bd1d-cfccd73083ed is in state STARTED 2026-04-08 00:42:01.603311 | orchestrator | 2026-04-08 00:42:01 | INFO  | Task e2967b4e-4adc-4f38-b2de-00fc440895d0 is in state STARTED 2026-04-08 00:42:01.603328 | orchestrator | 2026-04-08 00:42:01 | INFO  | Task cbf00e16-1e4f-449a-93f0-ffe41ef4516b is in state STARTED 2026-04-08 00:42:01.603340 | orchestrator | 2026-04-08 00:42:01 | INFO  | Task ab6134c5-4a9e-463c-b8d7-f884f60d587b is in state STARTED 2026-04-08 00:42:01.603352 | orchestrator | 2026-04-08 00:42:01 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:42:01.603364 | orchestrator | 2026-04-08 00:42:01 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:42:01.603375 | orchestrator | 2026-04-08 00:42:01 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED 2026-04-08 00:42:01.603387 | orchestrator | 2026-04-08
00:42:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:42:04.744855 | orchestrator | 2026-04-08 00:42:04 | INFO  | Task f1e83987-18b7-47a4-bd1d-cfccd73083ed is in state STARTED 2026-04-08 00:42:04.745075 | orchestrator | 2026-04-08 00:42:04 | INFO  | Task e2967b4e-4adc-4f38-b2de-00fc440895d0 is in state STARTED 2026-04-08 00:42:04.745095 | orchestrator | 2026-04-08 00:42:04 | INFO  | Task cbf00e16-1e4f-449a-93f0-ffe41ef4516b is in state STARTED 2026-04-08 00:42:04.745109 | orchestrator | 2026-04-08 00:42:04 | INFO  | Task ab6134c5-4a9e-463c-b8d7-f884f60d587b is in state STARTED 2026-04-08 00:42:04.745125 | orchestrator | 2026-04-08 00:42:04 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:42:04.745140 | orchestrator | 2026-04-08 00:42:04 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:42:04.745151 | orchestrator | 2026-04-08 00:42:04 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED 2026-04-08 00:42:04.745160 | orchestrator | 2026-04-08 00:42:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:42:07.872736 | orchestrator | 2026-04-08 00:42:07 | INFO  | Task f1e83987-18b7-47a4-bd1d-cfccd73083ed is in state STARTED 2026-04-08 00:42:07.873071 | orchestrator | 2026-04-08 00:42:07 | INFO  | Task e2967b4e-4adc-4f38-b2de-00fc440895d0 is in state STARTED 2026-04-08 00:42:07.875826 | orchestrator | 2026-04-08 00:42:07 | INFO  | Task cbf00e16-1e4f-449a-93f0-ffe41ef4516b is in state STARTED 2026-04-08 00:42:07.877083 | orchestrator | 2026-04-08 00:42:07 | INFO  | Task ab6134c5-4a9e-463c-b8d7-f884f60d587b is in state SUCCESS 2026-04-08 00:42:07.877482 | orchestrator | 2026-04-08 00:42:07.877508 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-04-08 00:42:07.877518 | orchestrator | 2026-04-08 00:42:07.877527 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2026-04-08 00:42:07.877536 | orchestrator | Wednesday 08 April 2026 00:41:51 +0000 (0:00:00.782) 0:00:00.782 ******* 2026-04-08 00:42:07.877559 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:42:07.877577 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:42:07.877586 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:42:07.877595 | orchestrator | changed: [testbed-manager] 2026-04-08 00:42:07.877603 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:42:07.877612 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:42:07.877620 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:42:07.877629 | orchestrator | 2026-04-08 00:42:07.877637 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-04-08 00:42:07.877646 | orchestrator | Wednesday 08 April 2026 00:41:56 +0000 (0:00:05.073) 0:00:05.855 ******* 2026-04-08 00:42:07.877655 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-08 00:42:07.877664 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-08 00:42:07.877673 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-08 00:42:07.877681 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-08 00:42:07.877690 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-08 00:42:07.877699 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-08 00:42:07.877707 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-08 00:42:07.877716 | orchestrator | 2026-04-08 00:42:07.877725 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-04-08 00:42:07.877734 | orchestrator | Wednesday 08 April 2026 00:41:58 +0000 (0:00:01.702) 0:00:07.558 ******* 2026-04-08 00:42:07.877746 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-08 00:41:57.598367', 'end': '2026-04-08 00:41:57.606378', 'delta': '0:00:00.008011', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-08 00:42:07.877758 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-08 00:41:57.541739', 'end': '2026-04-08 00:41:57.548174', 'delta': '0:00:00.006435', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-08 00:42:07.877768 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-08 00:41:57.568640', 'end': '2026-04-08 00:41:57.574579', 'delta': '0:00:00.005939', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-08 00:42:07.877820 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-08 00:41:57.869570', 'end': '2026-04-08 00:41:57.877001', 'delta': '0:00:00.007431', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-08 00:42:07.877830 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-08 00:41:58.061444', 'end': '2026-04-08 00:41:58.067015', 'delta': '0:00:00.005571', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-08 00:42:07.877839 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-08 00:41:58.235645', 'end': '2026-04-08 00:41:58.243202', 'delta': '0:00:00.007557', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-08 00:42:07.877848 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-08 00:41:57.850879', 'end': '2026-04-08 00:41:57.859447', 'delta': '0:00:00.008568', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-08 00:42:07.877857 | orchestrator | 2026-04-08 00:42:07.877894 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-04-08 00:42:07.877909 | orchestrator | Wednesday 08 April 2026 00:42:00 +0000 (0:00:02.055) 0:00:09.614 ******* 2026-04-08 00:42:07.877918 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-08 00:42:07.877927 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-08 00:42:07.877936 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-08 00:42:07.877944 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-08 00:42:07.877952 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-08 00:42:07.877961 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-08 00:42:07.877969 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-08 00:42:07.877978 | orchestrator | 2026-04-08 00:42:07.877987 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-04-08 00:42:07.877996 | orchestrator | Wednesday 08 April 2026 00:42:03 +0000 (0:00:02.686) 0:00:12.300 ******* 2026-04-08 00:42:07.878004 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-04-08 00:42:07.878050 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-04-08 00:42:07.878063 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-04-08 00:42:07.878072 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-04-08 00:42:07.878082 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-04-08 00:42:07.878091 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-04-08 00:42:07.878111 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-04-08 00:42:07.878128 | orchestrator | 2026-04-08 00:42:07.878137 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:42:07.878155 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:42:07.878166 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:42:07.878176 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:42:07.878185 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:42:07.878194 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:42:07.878203 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:42:07.878212 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:42:07.878220 | orchestrator | 2026-04-08 00:42:07.878229 | orchestrator | 2026-04-08 00:42:07.878238 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-04-08 00:42:07.878492 | orchestrator | Wednesday 08 April 2026 00:42:06 +0000 (0:00:02.993) 0:00:15.293 ******* 2026-04-08 00:42:07.878505 | orchestrator | =============================================================================== 2026-04-08 00:42:07.878513 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 5.07s 2026-04-08 00:42:07.878522 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.99s 2026-04-08 00:42:07.878530 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.69s 2026-04-08 00:42:07.878538 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.06s 2026-04-08 00:42:07.878546 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.70s 2026-04-08 00:42:07.878823 | orchestrator | 2026-04-08 00:42:07 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:42:07.881124 | orchestrator | 2026-04-08 00:42:07 | INFO  | Task 9447d72c-2a1a-45e6-8419-1ff765483034 is in state STARTED 2026-04-08 00:42:07.882339 | orchestrator | 2026-04-08 00:42:07 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:42:07.884592 | orchestrator | 2026-04-08 00:42:07 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED 2026-04-08 00:42:07.884662 | orchestrator | 2026-04-08 00:42:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:42:10.932532 | orchestrator | 2026-04-08 00:42:10 | INFO  | Task f1e83987-18b7-47a4-bd1d-cfccd73083ed is in state STARTED 2026-04-08 00:42:10.937766 | orchestrator | 2026-04-08 00:42:10 | INFO  | Task e2967b4e-4adc-4f38-b2de-00fc440895d0 is in state STARTED 2026-04-08 00:42:10.938080 | orchestrator | 2026-04-08 00:42:10 | INFO  | Task cbf00e16-1e4f-449a-93f0-ffe41ef4516b is 
in state STARTED 2026-04-08 00:42:10.939459 | orchestrator | 2026-04-08 00:42:10 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:42:10.941035 | orchestrator | 2026-04-08 00:42:10 | INFO  | Task 9447d72c-2a1a-45e6-8419-1ff765483034 is in state STARTED 2026-04-08 00:42:10.942078 | orchestrator | 2026-04-08 00:42:10 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:42:10.944433 | orchestrator | 2026-04-08 00:42:10 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED 2026-04-08 00:42:10.944555 | orchestrator | 2026-04-08 00:42:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:42:23.486560 | orchestrator | 2026-04-08 00:42:23 | INFO  | Task f1e83987-18b7-47a4-bd1d-cfccd73083ed is in state
STARTED 2026-04-08 00:42:23.486646 | orchestrator | 2026-04-08 00:42:23 | INFO  | Task e2967b4e-4adc-4f38-b2de-00fc440895d0 is in state STARTED 2026-04-08 00:42:23.486657 | orchestrator | 2026-04-08 00:42:23 | INFO  | Task cbf00e16-1e4f-449a-93f0-ffe41ef4516b is in state STARTED 2026-04-08 00:42:23.486665 | orchestrator | 2026-04-08 00:42:23 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:42:23.486672 | orchestrator | 2026-04-08 00:42:23 | INFO  | Task 9447d72c-2a1a-45e6-8419-1ff765483034 is in state STARTED 2026-04-08 00:42:23.486679 | orchestrator | 2026-04-08 00:42:23 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:42:23.486685 | orchestrator | 2026-04-08 00:42:23 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED 2026-04-08 00:42:23.486709 | orchestrator | 2026-04-08 00:42:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:42:26.241488 | orchestrator | 2026-04-08 00:42:26 | INFO  | Task f1e83987-18b7-47a4-bd1d-cfccd73083ed is in state STARTED 2026-04-08 00:42:26.241559 | orchestrator | 2026-04-08 00:42:26 | INFO  | Task e2967b4e-4adc-4f38-b2de-00fc440895d0 is in state STARTED 2026-04-08 00:42:26.241566 | orchestrator | 2026-04-08 00:42:26 | INFO  | Task cbf00e16-1e4f-449a-93f0-ffe41ef4516b is in state STARTED 2026-04-08 00:42:26.241570 | orchestrator | 2026-04-08 00:42:26 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:42:26.241574 | orchestrator | 2026-04-08 00:42:26 | INFO  | Task 9447d72c-2a1a-45e6-8419-1ff765483034 is in state STARTED 2026-04-08 00:42:26.241579 | orchestrator | 2026-04-08 00:42:26 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:42:26.241582 | orchestrator | 2026-04-08 00:42:26 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED 2026-04-08 00:42:26.241587 | orchestrator | 2026-04-08 00:42:26 | INFO  | Wait 1 second(s) until the next check 
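The status checks repeated throughout this log follow a plain poll-and-sleep pattern: query each pending task, drop the ones that have reached a terminal state, wait, then check again. A minimal sketch of such a loop, with a hypothetical `get_state` callback standing in for whatever the real osism wait logic queries (not the actual osism client API):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll each pending task until all of them reach a terminal state."""
    pending = set(task_ids)
    while pending:
        for task_id in list(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)  # terminal state: stop polling this task
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

This mirrors the log's behaviour of re-listing every still-running task on each pass and announcing the wait before sleeping; tasks that finish (like ab6134c5 above) simply vanish from subsequent passes.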
2026-04-08 00:42:29.374668 | orchestrator | 2026-04-08 00:42:29 | INFO  | Task f1e83987-18b7-47a4-bd1d-cfccd73083ed is in state STARTED
2026-04-08 00:42:29.374749 | orchestrator | 2026-04-08 00:42:29 | INFO  | Task e2967b4e-4adc-4f38-b2de-00fc440895d0 is in state STARTED
2026-04-08 00:42:29.374758 | orchestrator | 2026-04-08 00:42:29 | INFO  | Task cbf00e16-1e4f-449a-93f0-ffe41ef4516b is in state STARTED
2026-04-08 00:42:29.374765 | orchestrator | 2026-04-08 00:42:29 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:42:29.374793 | orchestrator | 2026-04-08 00:42:29 | INFO  | Task 9447d72c-2a1a-45e6-8419-1ff765483034 is in state STARTED
2026-04-08 00:42:29.374800 | orchestrator | 2026-04-08 00:42:29 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED
2026-04-08 00:42:29.374806 | orchestrator | 2026-04-08 00:42:29 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED
2026-04-08 00:42:29.374812 | orchestrator | 2026-04-08 00:42:29 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:42:32.615614 | orchestrator | 2026-04-08 00:42:32 | INFO  | Task f1e83987-18b7-47a4-bd1d-cfccd73083ed is in state STARTED
2026-04-08 00:42:32.615705 | orchestrator | 2026-04-08 00:42:32 | INFO  | Task e2967b4e-4adc-4f38-b2de-00fc440895d0 is in state STARTED
2026-04-08 00:42:32.615715 | orchestrator | 2026-04-08 00:42:32 | INFO  | Task cbf00e16-1e4f-449a-93f0-ffe41ef4516b is in state STARTED
2026-04-08 00:42:32.615723 | orchestrator | 2026-04-08 00:42:32 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:42:32.615742 | orchestrator | 2026-04-08 00:42:32 | INFO  | Task 9447d72c-2a1a-45e6-8419-1ff765483034 is in state STARTED
2026-04-08 00:42:32.615750 | orchestrator | 2026-04-08 00:42:32 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED
2026-04-08 00:42:32.615757 | orchestrator | 2026-04-08 00:42:32 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED
2026-04-08 00:42:32.615764 | orchestrator | 2026-04-08 00:42:32 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:42:35.619430 | orchestrator | 2026-04-08 00:42:35 | INFO  | Task f1e83987-18b7-47a4-bd1d-cfccd73083ed is in state STARTED
2026-04-08 00:42:35.619603 | orchestrator | 2026-04-08 00:42:35 | INFO  | Task e2967b4e-4adc-4f38-b2de-00fc440895d0 is in state STARTED
2026-04-08 00:42:35.620167 | orchestrator | 2026-04-08 00:42:35 | INFO  | Task cbf00e16-1e4f-449a-93f0-ffe41ef4516b is in state SUCCESS
2026-04-08 00:42:35.621752 | orchestrator | 2026-04-08 00:42:35 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:42:35.622481 | orchestrator | 2026-04-08 00:42:35 | INFO  | Task 9447d72c-2a1a-45e6-8419-1ff765483034 is in state STARTED
2026-04-08 00:42:35.625899 | orchestrator | 2026-04-08 00:42:35 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED
2026-04-08 00:42:35.627816 | orchestrator | 2026-04-08 00:42:35 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED
2026-04-08 00:42:35.627900 | orchestrator | 2026-04-08 00:42:35 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:42:38.784940 | orchestrator | 2026-04-08 00:42:38 | INFO  | Task f1e83987-18b7-47a4-bd1d-cfccd73083ed is in state STARTED
2026-04-08 00:42:38.785047 | orchestrator | 2026-04-08 00:42:38 | INFO  | Task e2967b4e-4adc-4f38-b2de-00fc440895d0 is in state STARTED
2026-04-08 00:42:38.785081 | orchestrator | 2026-04-08 00:42:38 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:42:38.785094 | orchestrator | 2026-04-08 00:42:38 | INFO  | Task 9447d72c-2a1a-45e6-8419-1ff765483034 is in state STARTED
2026-04-08 00:42:38.785106 | orchestrator | 2026-04-08 00:42:38 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED
2026-04-08 00:42:38.785117 | orchestrator | 2026-04-08 00:42:38 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED
2026-04-08 00:42:38.785129 | orchestrator | 2026-04-08 00:42:38 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:42:41.737737 | orchestrator | 2026-04-08 00:42:41 | INFO  | Task f1e83987-18b7-47a4-bd1d-cfccd73083ed is in state STARTED
2026-04-08 00:42:41.737786 | orchestrator | 2026-04-08 00:42:41 | INFO  | Task e2967b4e-4adc-4f38-b2de-00fc440895d0 is in state SUCCESS
2026-04-08 00:42:41.737796 | orchestrator | 2026-04-08 00:42:41 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:42:41.738734 | orchestrator | 2026-04-08 00:42:41 | INFO  | Task 9447d72c-2a1a-45e6-8419-1ff765483034 is in state STARTED
2026-04-08 00:42:41.740148 | orchestrator | 2026-04-08 00:42:41 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED
2026-04-08 00:42:41.740172 | orchestrator | 2026-04-08 00:42:41 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED
2026-04-08 00:42:41.740183 | orchestrator | 2026-04-08 00:42:41 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:42:44.790113 | orchestrator | 2026-04-08 00:42:44 | INFO  | Task f1e83987-18b7-47a4-bd1d-cfccd73083ed is in state STARTED
2026-04-08 00:42:44.792473 | orchestrator | 2026-04-08 00:42:44 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:42:44.793074 | orchestrator | 2026-04-08 00:42:44 | INFO  | Task 9447d72c-2a1a-45e6-8419-1ff765483034 is in state STARTED
2026-04-08 00:42:44.794274 | orchestrator | 2026-04-08 00:42:44 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED
2026-04-08 00:42:44.795108 | orchestrator | 2026-04-08 00:42:44 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED
2026-04-08 00:42:44.795163 | orchestrator | 2026-04-08 00:42:44 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:42:47.851909 | orchestrator | 2026-04-08 00:42:47 | INFO  | Task f1e83987-18b7-47a4-bd1d-cfccd73083ed is in state STARTED
2026-04-08 00:42:47.851983 | orchestrator | 2026-04-08 00:42:47 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:42:47.852534 | orchestrator | 2026-04-08 00:42:47 | INFO  | Task 9447d72c-2a1a-45e6-8419-1ff765483034 is in state STARTED
2026-04-08 00:42:47.855452 | orchestrator | 2026-04-08 00:42:47 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED
2026-04-08 00:42:47.855526 | orchestrator | 2026-04-08 00:42:47 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED
2026-04-08 00:42:47.855542 | orchestrator | 2026-04-08 00:42:47 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:42:50.898938 | orchestrator | 2026-04-08 00:42:50 | INFO  | Task f1e83987-18b7-47a4-bd1d-cfccd73083ed is in state STARTED
2026-04-08 00:42:50.899033 | orchestrator | 2026-04-08 00:42:50 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:42:50.901355 | orchestrator | 2026-04-08 00:42:50 | INFO  | Task 9447d72c-2a1a-45e6-8419-1ff765483034 is in state STARTED
2026-04-08 00:42:50.902343 | orchestrator | 2026-04-08 00:42:50 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED
2026-04-08 00:42:50.903517 | orchestrator | 2026-04-08 00:42:50 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED
2026-04-08 00:42:50.904196 | orchestrator | 2026-04-08 00:42:50 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:42:53.952533 | orchestrator | 2026-04-08 00:42:53 | INFO  | Task f1e83987-18b7-47a4-bd1d-cfccd73083ed is in state SUCCESS
2026-04-08 00:42:53.955334 | orchestrator |
2026-04-08 00:42:53.955382 | orchestrator |
2026-04-08 00:42:53.955395 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-04-08 00:42:53.955408 | orchestrator |
2026-04-08 00:42:53.955419 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-04-08 00:42:53.955430 | orchestrator | Wednesday 08 April 2026 00:41:51 +0000 (0:00:00.709) 0:00:00.709 *******
2026-04-08 00:42:53.955441 | orchestrator | ok: [testbed-manager] => {
2026-04-08 00:42:53.955472 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-04-08 00:42:53.955485 | orchestrator | }
2026-04-08 00:42:53.955496 | orchestrator |
2026-04-08 00:42:53.955512 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-04-08 00:42:53.955523 | orchestrator | Wednesday 08 April 2026 00:41:51 +0000 (0:00:00.132) 0:00:00.842 *******
2026-04-08 00:42:53.955534 | orchestrator | ok: [testbed-manager]
2026-04-08 00:42:53.955545 | orchestrator |
2026-04-08 00:42:53.955555 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-04-08 00:42:53.955565 | orchestrator | Wednesday 08 April 2026 00:41:54 +0000 (0:00:02.638) 0:00:03.480 *******
2026-04-08 00:42:53.955576 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-04-08 00:42:53.955586 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-04-08 00:42:53.955597 | orchestrator |
2026-04-08 00:42:53.955607 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-04-08 00:42:53.955617 | orchestrator | Wednesday 08 April 2026 00:41:57 +0000 (0:00:02.676) 0:00:06.157 *******
2026-04-08 00:42:53.955627 | orchestrator | changed: [testbed-manager]
2026-04-08 00:42:53.955638 | orchestrator |
2026-04-08 00:42:53.955648 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-04-08 00:42:53.955658 | orchestrator | Wednesday 08 April 2026 00:42:00 +0000 (0:00:03.382) 0:00:09.540 *******
2026-04-08 00:42:53.955668 | orchestrator | changed: [testbed-manager]
2026-04-08 00:42:53.955678 | orchestrator |
2026-04-08 00:42:53.955688 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-04-08 00:42:53.955699 | orchestrator | Wednesday 08 April 2026 00:42:02 +0000 (0:00:01.963) 0:00:11.503 *******
2026-04-08 00:42:53.955709 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-04-08 00:42:53.955719 | orchestrator | ok: [testbed-manager]
2026-04-08 00:42:53.955730 | orchestrator |
2026-04-08 00:42:53.955740 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-04-08 00:42:53.955750 | orchestrator | Wednesday 08 April 2026 00:42:29 +0000 (0:00:27.195) 0:00:38.699 *******
2026-04-08 00:42:53.955760 | orchestrator | changed: [testbed-manager]
2026-04-08 00:42:53.955771 | orchestrator |
2026-04-08 00:42:53.955779 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:42:53.955786 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:42:53.955793 | orchestrator |
2026-04-08 00:42:53.955800 | orchestrator |
2026-04-08 00:42:53.955806 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:42:53.955812 | orchestrator | Wednesday 08 April 2026 00:42:32 +0000 (0:00:03.212) 0:00:41.911 *******
2026-04-08 00:42:53.955818 | orchestrator | ===============================================================================
2026-04-08 00:42:53.955863 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.20s
2026-04-08 00:42:53.956425 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.38s
2026-04-08 00:42:53.956451 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.22s
2026-04-08 00:42:53.956460 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.68s
2026-04-08 00:42:53.956469 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.64s
2026-04-08 00:42:53.956477 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.96s
2026-04-08 00:42:53.956486 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.13s
2026-04-08 00:42:53.956495 | orchestrator |
2026-04-08 00:42:53.956504 | orchestrator |
2026-04-08 00:42:53.956512 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-04-08 00:42:53.956521 | orchestrator |
2026-04-08 00:42:53.956530 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-04-08 00:42:53.956551 | orchestrator | Wednesday 08 April 2026 00:41:51 +0000 (0:00:00.739) 0:00:00.739 *******
2026-04-08 00:42:53.956560 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-04-08 00:42:53.956568 | orchestrator |
2026-04-08 00:42:53.956578 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-04-08 00:42:53.956587 | orchestrator | Wednesday 08 April 2026 00:41:52 +0000 (0:00:00.620) 0:00:01.360 *******
2026-04-08 00:42:53.956595 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-04-08 00:42:53.956604 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-04-08 00:42:53.956614 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-04-08 00:42:53.956623 | orchestrator |
2026-04-08 00:42:53.956632 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-04-08 00:42:53.956641 | orchestrator | Wednesday 08 April 2026 00:41:56 +0000 (0:00:04.048) 0:00:05.409 *******
2026-04-08 00:42:53.956651 | orchestrator | changed: [testbed-manager]
2026-04-08 00:42:53.956658 | orchestrator |
2026-04-08 00:42:53.956664 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-04-08 00:42:53.956670 | orchestrator | Wednesday 08 April 2026 00:41:58 +0000 (0:00:01.916) 0:00:07.326 *******
2026-04-08 00:42:53.956685 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-04-08 00:42:53.956691 | orchestrator | ok: [testbed-manager]
2026-04-08 00:42:53.956697 | orchestrator |
2026-04-08 00:42:53.956702 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-04-08 00:42:53.956707 | orchestrator | Wednesday 08 April 2026 00:42:31 +0000 (0:00:33.061) 0:00:40.387 *******
2026-04-08 00:42:53.956713 | orchestrator | changed: [testbed-manager]
2026-04-08 00:42:53.956718 | orchestrator |
2026-04-08 00:42:53.956724 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-04-08 00:42:53.956729 | orchestrator | Wednesday 08 April 2026 00:42:33 +0000 (0:00:01.359) 0:00:42.220 *******
2026-04-08 00:42:53.956734 | orchestrator | ok: [testbed-manager]
2026-04-08 00:42:53.956740 | orchestrator |
2026-04-08 00:42:53.956750 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-04-08 00:42:53.956756 | orchestrator | Wednesday 08 April 2026 00:42:34 +0000 (0:00:02.342) 0:00:43.579 *******
2026-04-08 00:42:53.956761 | orchestrator | changed: [testbed-manager]
2026-04-08 00:42:53.956766 | orchestrator |
2026-04-08 00:42:53.956772 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-04-08 00:42:53.956777 | orchestrator | Wednesday 08 April 2026 00:42:36 +0000 (0:00:01.791) 0:00:45.922 *******
2026-04-08 00:42:53.956782 | orchestrator | changed: [testbed-manager]
2026-04-08 00:42:53.956788 | orchestrator |
2026-04-08 00:42:53.956793 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-04-08 00:42:53.956798 | orchestrator | Wednesday 08 April 2026 00:42:38 +0000 (0:00:01.448) 0:00:47.713 *******
2026-04-08 00:42:53.956804 | orchestrator | changed: [testbed-manager]
2026-04-08 00:42:53.956809 | orchestrator |
2026-04-08 00:42:53.956814 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-04-08 00:42:53.956842 | orchestrator | Wednesday 08 April 2026 00:42:39 +0000 (0:00:01.448) 0:00:49.161 *******
2026-04-08 00:42:53.956851 | orchestrator | ok: [testbed-manager]
2026-04-08 00:42:53.956860 | orchestrator |
2026-04-08 00:42:53.956869 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:42:53.956878 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:42:53.956888 | orchestrator |
2026-04-08 00:42:53.956896 | orchestrator |
2026-04-08 00:42:53.956902 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:42:53.956912 | orchestrator | Wednesday 08 April 2026 00:42:40 +0000 (0:00:00.379) 0:00:49.541 *******
2026-04-08 00:42:53.956918 | orchestrator | ===============================================================================
2026-04-08 00:42:53.956923 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.06s
2026-04-08 00:42:53.956928 | orchestrator | osism.services.openstackclient : Create required directories ------------ 4.05s
2026-04-08 00:42:53.956934 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.34s
2026-04-08 00:42:53.956939 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.92s
2026-04-08 00:42:53.956944 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.83s
2026-04-08 00:42:53.956949 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.79s
2026-04-08 00:42:53.956955 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.45s
2026-04-08 00:42:53.956960 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.36s
2026-04-08 00:42:53.956965 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.62s
2026-04-08 00:42:53.956971 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.38s
2026-04-08 00:42:53.956976 | orchestrator |
2026-04-08 00:42:53.956981 | orchestrator |
2026-04-08 00:42:53.956987 | orchestrator | PLAY [Apply role common] *******************************************************
2026-04-08 00:42:53.956992 | orchestrator |
2026-04-08 00:42:53.956997 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-08 00:42:53.957003 | orchestrator | Wednesday 08 April 2026 00:41:45 +0000 (0:00:00.294) 0:00:00.294 *******
2026-04-08 00:42:53.957008 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:42:53.957014 | orchestrator |
2026-04-08 00:42:53.957019 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-04-08 00:42:53.957024 | orchestrator | Wednesday 08 April 2026 00:41:46 +0000 (0:00:01.198) 0:00:01.492 *******
2026-04-08 00:42:53.957030 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-08 00:42:53.957035 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-08 00:42:53.957042 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-08 00:42:53.957048 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-08 00:42:53.957055 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-08 00:42:53.957061 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-08 00:42:53.957067 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-08 00:42:53.957073 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-08 00:42:53.957079 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-08 00:42:53.957086 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-08 00:42:53.957092 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-08 00:42:53.957104 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-08 00:42:53.957349 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-08 00:42:53.957359 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-08 00:42:53.957366 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-08 00:42:53.957373 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-08 00:42:53.957383 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-08 00:42:53.957400 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-08 00:42:53.957479 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-08 00:42:53.957489 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-08 00:42:53.957498 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-08 00:42:53.957507 | orchestrator |
2026-04-08 00:42:53.957516 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-08 00:42:53.957526 | orchestrator | Wednesday 08 April 2026 00:41:50 +0000 (0:00:03.726) 0:00:05.218 *******
2026-04-08 00:42:53.957535 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:42:53.957545 | orchestrator |
2026-04-08 00:42:53.957554 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-04-08 00:42:53.957564 | orchestrator | Wednesday 08 April 2026 00:41:51 +0000 (0:00:01.252) 0:00:06.471 *******
2026-04-08 00:42:53.957603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:42:53.957664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:42:53.957675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:42:53.957685 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:42:53.957769 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:42:53.957794 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:42:53.957803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:42:53.957813 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:42:53.957859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:42:53.957870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:42:53.957880 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:42:53.957913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:42:53.957930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:42:53.958156 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:42:53.958174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:42:53.958186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:42:53.958192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:42:53.958197 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:42:53.958203 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:42:53.958209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:42:53.958247 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:42:53.958254 | orchestrator |
2026-04-08 00:42:53.958260 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-04-08 00:42:53.958268 | orchestrator | Wednesday 08 April 2026 00:41:56 +0000 (0:00:05.357) 0:00:11.828 *******
2026-04-08 00:42:53.958275 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:42:53.958281 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:42:53.958287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:42:53.958293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:42:53.958299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-08 00:42:53.958305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:42:53.958330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:42:53.958340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'},
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958350 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:42:53.958361 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958370 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:42:53.958380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958399 | orchestrator | skipping: 
[testbed-node-1] 2026-04-08 00:42:53.958407 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:42:53.958417 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958474 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:42:53.958488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:42:53.958498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958506 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:42:53.958515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:42:53.958535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958545 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:53.958553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958575 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:42:53.958584 | orchestrator | 2026-04-08 00:42:53.958593 | orchestrator 
| TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-08 00:42:53.958601 | orchestrator | Wednesday 08 April 2026 00:42:00 +0000 (0:00:03.358) 0:00:15.187 ******* 2026-04-08 00:42:53.958637 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:42:53.958649 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}})  2026-04-08 00:42:53.958667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:42:53.958689 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:42:53.958698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958780 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 
00:42:53.958799 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:42:53.958809 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:42:53.958861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:42:53.958890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958900 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:42:53.958910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.958928 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:42:53.958968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:42:53.958982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:42:53.958991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.959000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.959014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:42:53.959023 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:53.959033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:42:53.959041 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:42:53.959049 | orchestrator |
2026-04-08 00:42:53.959057 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-04-08 00:42:53.959066 | orchestrator | Wednesday 08 April 2026 00:42:04 +0000 (0:00:04.569) 0:00:19.757 *******
2026-04-08 00:42:53.959074 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:42:53.959083 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:42:53.959091 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:42:53.959099 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:42:53.959107 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:42:53.959115 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:53.959123 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:42:53.959130 | orchestrator |
2026-04-08 00:42:53.959138 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-04-08 00:42:53.959146 | orchestrator | Wednesday 08 April 2026 00:42:06 +0000 (0:00:01.688) 0:00:21.445 *******
2026-04-08 00:42:53.959154 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:42:53.959162 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:42:53.959170 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:42:53.959178 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:42:53.959185 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:42:53.959193 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:53.959224 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:42:53.959233 | orchestrator |
2026-04-08 00:42:53.959241 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-04-08 00:42:53.959249 | orchestrator | Wednesday 08 April 2026 00:42:07 +0000 (0:00:01.439) 0:00:22.884 *******
2026-04-08 00:42:53.959257 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:42:53.959265 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:42:53.959272 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:42:53.959280 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:42:53.959288 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:42:53.959295 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:42:53.959304 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:42:53.959311 | orchestrator |
2026-04-08 00:42:53.959319 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-04-08 00:42:53.959331 | orchestrator | Wednesday 08 April 2026 00:42:09 +0000 (0:00:02.079) 0:00:24.269 *******
2026-04-08 00:42:53.959339 | orchestrator | changed: [testbed-manager]
2026-04-08 00:42:53.959347 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:42:53.959354 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:42:53.959362 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:42:53.959370 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:42:53.959383 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:42:53.959391 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:42:53.959400 | orchestrator |
2026-04-08 00:42:53.959408 | orchestrator | TASK [common : Copying over config.json files for services]
******************** 2026-04-08 00:42:53.959416 | orchestrator | Wednesday 08 April 2026 00:42:11 +0000 (0:00:02.079) 0:00:26.348 ******* 2026-04-08 00:42:53.959425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.959434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.959443 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.959452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.959461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.959492 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.959507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.959521 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.959530 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.959538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.959547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.959556 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.959587 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.959600 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.959615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.959624 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.959633 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.959642 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.959651 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.959660 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.959670 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.959678 | orchestrator | 2026-04-08 00:42:53.959691 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-08 00:42:53.959711 | orchestrator | Wednesday 08 April 2026 00:42:16 +0000 (0:00:05.310) 0:00:31.659 ******* 2026-04-08 00:42:53.959720 | orchestrator | [WARNING]: Skipped 2026-04-08 00:42:53.959730 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-08 00:42:53.959739 | orchestrator | to this access issue: 2026-04-08 00:42:53.959748 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-08 00:42:53.959757 | orchestrator | directory 2026-04-08 00:42:53.959766 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 00:42:53.959775 | orchestrator | 2026-04-08 00:42:53.959784 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-08 00:42:53.959796 | orchestrator | Wednesday 08 April 2026 00:42:17 +0000 (0:00:00.871) 0:00:32.530 ******* 2026-04-08 00:42:53.959805 | orchestrator | [WARNING]: Skipped 2026-04-08 00:42:53.959814 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-08 00:42:53.959839 | orchestrator | to this access issue: 2026-04-08 00:42:53.959849 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-08 00:42:53.959857 | orchestrator | directory 2026-04-08 00:42:53.959866 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 00:42:53.959875 | orchestrator | 2026-04-08 00:42:53.959884 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-08 00:42:53.959892 | orchestrator | Wednesday 08 April 2026 00:42:18 +0000 (0:00:00.783) 0:00:33.314 ******* 
2026-04-08 00:42:53.959901 | orchestrator | [WARNING]: Skipped 2026-04-08 00:42:53.959910 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-08 00:42:53.959919 | orchestrator | to this access issue: 2026-04-08 00:42:53.959927 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-08 00:42:53.959936 | orchestrator | directory 2026-04-08 00:42:53.959944 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 00:42:53.959953 | orchestrator | 2026-04-08 00:42:53.959962 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-08 00:42:53.959970 | orchestrator | Wednesday 08 April 2026 00:42:19 +0000 (0:00:00.895) 0:00:34.210 ******* 2026-04-08 00:42:53.959979 | orchestrator | [WARNING]: Skipped 2026-04-08 00:42:53.959987 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-08 00:42:53.959996 | orchestrator | to this access issue: 2026-04-08 00:42:53.960005 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-08 00:42:53.960014 | orchestrator | directory 2026-04-08 00:42:53.960023 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 00:42:53.960032 | orchestrator | 2026-04-08 00:42:53.960041 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-08 00:42:53.960050 | orchestrator | Wednesday 08 April 2026 00:42:20 +0000 (0:00:00.878) 0:00:35.088 ******* 2026-04-08 00:42:53.960059 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:42:53.960068 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:42:53.960077 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:42:53.960086 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:42:53.960095 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:42:53.960104 | orchestrator | changed: 
[testbed-node-5] 2026-04-08 00:42:53.960113 | orchestrator | changed: [testbed-manager] 2026-04-08 00:42:53.960122 | orchestrator | 2026-04-08 00:42:53.960131 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-08 00:42:53.960140 | orchestrator | Wednesday 08 April 2026 00:42:25 +0000 (0:00:05.131) 0:00:40.219 ******* 2026-04-08 00:42:53.960149 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-08 00:42:53.960158 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-08 00:42:53.960172 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-08 00:42:53.960181 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-08 00:42:53.960190 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-08 00:42:53.960199 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-08 00:42:53.960208 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-08 00:42:53.960216 | orchestrator | 2026-04-08 00:42:53.960226 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-08 00:42:53.960234 | orchestrator | Wednesday 08 April 2026 00:42:28 +0000 (0:00:03.415) 0:00:43.635 ******* 2026-04-08 00:42:53.960243 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:42:53.960252 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:42:53.960261 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:42:53.960270 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:42:53.960279 | orchestrator | changed: 
[testbed-node-4] 2026-04-08 00:42:53.960287 | orchestrator | changed: [testbed-manager] 2026-04-08 00:42:53.960296 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:42:53.960305 | orchestrator | 2026-04-08 00:42:53.960314 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-08 00:42:53.960323 | orchestrator | Wednesday 08 April 2026 00:42:32 +0000 (0:00:03.685) 0:00:47.321 ******* 2026-04-08 00:42:53.960342 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.960355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.960366 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.960377 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.960386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.960400 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.960410 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.960419 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.960433 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.960447 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.960456 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.960465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.960479 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.960489 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.960498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.960512 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.960525 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.960534 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.960543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.960557 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.960566 | orchestrator | ok: [testbed-node-5] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.960575 | orchestrator | 2026-04-08 00:42:53.960584 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-08 00:42:53.960593 | orchestrator | Wednesday 08 April 2026 00:42:35 +0000 (0:00:03.011) 0:00:50.332 ******* 2026-04-08 00:42:53.960601 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-08 00:42:53.960610 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-08 00:42:53.960618 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-08 00:42:53.960626 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-08 00:42:53.960635 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-08 00:42:53.960644 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-08 00:42:53.960652 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-08 00:42:53.960662 | orchestrator | 2026-04-08 00:42:53.960670 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-08 00:42:53.960679 | orchestrator | Wednesday 08 April 2026 00:42:37 +0000 (0:00:01.898) 0:00:52.231 ******* 2026-04-08 00:42:53.960688 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-08 00:42:53.960697 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-08 00:42:53.960706 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-08 00:42:53.960714 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-08 00:42:53.960728 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-08 00:42:53.960737 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-08 00:42:53.960746 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-08 00:42:53.960754 | orchestrator | 2026-04-08 00:42:53.960762 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-04-08 00:42:53.960771 | orchestrator | Wednesday 08 April 2026 00:42:40 +0000 (0:00:03.187) 0:00:55.419 ******* 2026-04-08 00:42:53.960783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.960797 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.960806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.960816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.960967 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.960977 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.960992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.961005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-08 00:42:53.961020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.961030 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.961039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.961048 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.961057 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.961071 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.961083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.961098 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.961107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.961116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.961124 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.961133 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.961142 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:42:53.961152 | orchestrator | 2026-04-08 00:42:53.961160 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-04-08 00:42:53.961169 | orchestrator | Wednesday 08 April 2026 00:42:44 +0000 (0:00:03.701) 0:00:59.120 ******* 2026-04-08 00:42:53.961178 | orchestrator | changed: [testbed-manager] => { 2026-04-08 00:42:53.961186 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:42:53.961195 | orchestrator | } 2026-04-08 00:42:53.961204 | orchestrator | changed: [testbed-node-0] => { 2026-04-08 00:42:53.961212 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:42:53.961220 | orchestrator | } 2026-04-08 00:42:53.961229 | orchestrator | changed: [testbed-node-1] => { 2026-04-08 00:42:53.961237 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:42:53.961246 | orchestrator | } 2026-04-08 
00:42:53.961255 | orchestrator | changed: [testbed-node-2] => { 2026-04-08 00:42:53.961263 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:42:53.961280 | orchestrator | } 2026-04-08 00:42:53.961288 | orchestrator | changed: [testbed-node-3] => { 2026-04-08 00:42:53.961297 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:42:53.961305 | orchestrator | } 2026-04-08 00:42:53.961314 | orchestrator | changed: [testbed-node-4] => { 2026-04-08 00:42:53.961326 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:42:53.961335 | orchestrator | } 2026-04-08 00:42:53.961344 | orchestrator | changed: [testbed-node-5] => { 2026-04-08 00:42:53.961352 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:42:53.961361 | orchestrator | } 2026-04-08 00:42:53.961370 | orchestrator | 2026-04-08 00:42:53.961379 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-08 00:42:53.961388 | orchestrator | Wednesday 08 April 2026 00:42:44 +0000 (0:00:00.795) 0:00:59.916 ******* 2026-04-08 00:42:53.961402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:42:53.961411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.961420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.961430 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:42:53.961439 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-08 00:42:53.961450 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.961464 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:42:53.961470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:42:53.961483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.961489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.961495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:42:53.961501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.961506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
 2026-04-08 00:42:53.961511 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:42:53.961519 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:42:53.961527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:42:53.961540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.961553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.961561 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:42:53.961569 | orchestrator | skipping: [testbed-node-3] 
2026-04-08 00:42:53.961580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:42:53.961589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.961598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.961606 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:42:53.961614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-08 00:42:53.961623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.961636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:42:53.961642 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:42:53.961648 | orchestrator | 2026-04-08 00:42:53.961653 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-04-08 00:42:53.961659 | orchestrator | Wednesday 08 April 2026 00:42:46 +0000 (0:00:01.636) 0:01:01.553 ******* 2026-04-08 00:42:53.961665 | orchestrator | changed: [testbed-manager] 2026-04-08 00:42:53.961670 | orchestrator | 
changed: [testbed-node-0] 2026-04-08 00:42:53.961675 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:42:53.961681 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:42:53.961686 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:42:53.961692 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:42:53.961698 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:42:53.961703 | orchestrator | 2026-04-08 00:42:53.961712 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-04-08 00:42:53.961718 | orchestrator | Wednesday 08 April 2026 00:42:47 +0000 (0:00:01.428) 0:01:02.982 ******* 2026-04-08 00:42:53.961723 | orchestrator | changed: [testbed-manager] 2026-04-08 00:42:53.961728 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:42:53.961734 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:42:53.961739 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:42:53.961745 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:42:53.961750 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:42:53.961756 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:42:53.961761 | orchestrator | 2026-04-08 00:42:53.961767 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-08 00:42:53.961772 | orchestrator | Wednesday 08 April 2026 00:42:49 +0000 (0:00:01.196) 0:01:04.178 ******* 2026-04-08 00:42:53.961778 | orchestrator | 2026-04-08 00:42:53.961783 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-08 00:42:53.961789 | orchestrator | Wednesday 08 April 2026 00:42:49 +0000 (0:00:00.072) 0:01:04.251 ******* 2026-04-08 00:42:53.961794 | orchestrator | 2026-04-08 00:42:53.961800 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-08 00:42:53.961806 | orchestrator | Wednesday 08 April 2026 00:42:49 +0000 (0:00:00.070) 
0:01:04.321 ******* 2026-04-08 00:42:53.961811 | orchestrator | 2026-04-08 00:42:53.961817 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-08 00:42:53.961842 | orchestrator | Wednesday 08 April 2026 00:42:49 +0000 (0:00:00.063) 0:01:04.384 ******* 2026-04-08 00:42:53.961848 | orchestrator | 2026-04-08 00:42:53.961854 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-08 00:42:53.961859 | orchestrator | Wednesday 08 April 2026 00:42:49 +0000 (0:00:00.066) 0:01:04.451 ******* 2026-04-08 00:42:53.961865 | orchestrator | 2026-04-08 00:42:53.961871 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-08 00:42:53.961876 | orchestrator | Wednesday 08 April 2026 00:42:49 +0000 (0:00:00.064) 0:01:04.515 ******* 2026-04-08 00:42:53.961882 | orchestrator | 2026-04-08 00:42:53.961887 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-08 00:42:53.961961 | orchestrator | Wednesday 08 April 2026 00:42:49 +0000 (0:00:00.063) 0:01:04.579 ******* 2026-04-08 00:42:53.961972 | orchestrator | 2026-04-08 00:42:53.961977 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-04-08 00:42:53.961986 | orchestrator | Wednesday 08 April 2026 00:42:49 +0000 (0:00:00.086) 0:01:04.665 ******* 2026-04-08 00:42:53.961997 | orchestrator | fatal: [testbed-manager]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_dhv20g9q/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_dhv20g9q/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_dhv20g9q/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_dhv20g9q/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:42:53.962007 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ku41ndth/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ku41ndth/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_ku41ndth/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ku41ndth/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:42:53.962046 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_pdrwaz8f/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_pdrwaz8f/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_pdrwaz8f/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n 
self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_pdrwaz8f/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:42:53.962054 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_32gugd5y/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_32gugd5y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_32gugd5y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_32gugd5y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:42:53.962070 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_jrcnwh1o/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_jrcnwh1o/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_jrcnwh1o/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_jrcnwh1o/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:42:53.962079 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_2pz_hpb3/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_2pz_hpb3/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_2pz_hpb3/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n 
self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_2pz_hpb3/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:42:53.962095 | orchestrator | fatal: [testbed-node-4]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_b_97_5j0/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_b_97_5j0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_b_97_5j0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_b_97_5j0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:42:53.962103 | orchestrator | 2026-04-08 00:42:53.962109 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:42:53.962114 | orchestrator | testbed-manager : ok=20  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-08 00:42:53.962119 | orchestrator | testbed-node-0 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-08 00:42:53.962124 | orchestrator | testbed-node-1 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-08 00:42:53.962129 | orchestrator | testbed-node-2 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-08 00:42:53.962134 | orchestrator | testbed-node-3 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-08 00:42:53.962139 | orchestrator | testbed-node-4 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-08 00:42:53.962144 | orchestrator | testbed-node-5 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-08 00:42:53.962148 | orchestrator | 2026-04-08 00:42:53.962153 | orchestrator | 2026-04-08 00:42:53.962158 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:42:53.962163 | orchestrator | Wednesday 08 April 2026 00:42:52 +0000 (0:00:02.873) 0:01:07.539 ******* 2026-04-08 00:42:53.962168 | orchestrator | =============================================================================== 2026-04-08 00:42:53.962173 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.36s 2026-04-08 00:42:53.962178 | 
orchestrator | common : Copying over config.json files for services -------------------- 5.31s 2026-04-08 00:42:53.962182 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.13s 2026-04-08 00:42:53.962190 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 4.57s 2026-04-08 00:42:53.962195 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.73s 2026-04-08 00:42:53.962199 | orchestrator | service-check-containers : common | Check containers -------------------- 3.70s 2026-04-08 00:42:53.962204 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.69s 2026-04-08 00:42:53.962209 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.42s 2026-04-08 00:42:53.962214 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.36s 2026-04-08 00:42:53.962221 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.19s 2026-04-08 00:42:53.962229 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.01s 2026-04-08 00:42:53.962234 | orchestrator | common : Restart fluentd container -------------------------------------- 2.87s 2026-04-08 00:42:53.962239 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.08s 2026-04-08 00:42:53.962244 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.90s 2026-04-08 00:42:53.962248 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 1.69s 2026-04-08 00:42:53.962253 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.64s 2026-04-08 00:42:53.962258 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.44s 2026-04-08 00:42:53.962263 | 
orchestrator | common : Creating log volume -------------------------------------------- 1.43s 2026-04-08 00:42:53.962268 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.38s 2026-04-08 00:42:53.962272 | orchestrator | common : include_tasks -------------------------------------------------- 1.25s 2026-04-08 00:42:53.962277 | orchestrator | 2026-04-08 00:42:53 | INFO  | Task e339d375-7af2-44fe-aee2-8cf97755718d is in state STARTED 2026-04-08 00:42:53.962283 | orchestrator | 2026-04-08 00:42:53 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED 2026-04-08 00:42:53.962287 | orchestrator | 2026-04-08 00:42:53 | INFO  | Task be226e0e-5e47-467f-9a79-36cf1f9fe328 is in state STARTED 2026-04-08 00:42:53.962292 | orchestrator | 2026-04-08 00:42:53 | INFO  | Task b4ed936e-d93c-448e-81a0-7f7ef0853050 is in state STARTED 2026-04-08 00:42:53.962794 | orchestrator | 2026-04-08 00:42:53 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:42:53.966315 | orchestrator | 2026-04-08 00:42:53 | INFO  | Task 9447d72c-2a1a-45e6-8419-1ff765483034 is in state STARTED 2026-04-08 00:42:53.967198 | orchestrator | 2026-04-08 00:42:53 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:42:53.968155 | orchestrator | 2026-04-08 00:42:53 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED 2026-04-08 00:42:53.968171 | orchestrator | 2026-04-08 00:42:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:42:57.019435 | orchestrator | 2026-04-08 00:42:57 | INFO  | Task e339d375-7af2-44fe-aee2-8cf97755718d is in state STARTED 2026-04-08 00:42:57.022533 | orchestrator | 2026-04-08 00:42:57 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED 2026-04-08 00:42:57.026477 | orchestrator | 2026-04-08 00:42:57 | INFO  | Task be226e0e-5e47-467f-9a79-36cf1f9fe328 is in state STARTED 2026-04-08 00:42:57.028523 | orchestrator | 
2026-04-08 00:42:57 | INFO  | Task b4ed936e-d93c-448e-81a0-7f7ef0853050 is in state STARTED 2026-04-08 00:42:57.030760 | orchestrator | 2026-04-08 00:42:57 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:42:57.033665 | orchestrator | 2026-04-08 00:42:57 | INFO  | Task 9447d72c-2a1a-45e6-8419-1ff765483034 is in state STARTED 2026-04-08 00:42:57.034583 | orchestrator | 2026-04-08 00:42:57 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:42:57.037001 | orchestrator | 2026-04-08 00:42:57 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED 2026-04-08 00:42:57.037065 | orchestrator | 2026-04-08 00:42:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:43:12.394743 | orchestrator | 2026-04-08 00:43:12 | INFO  | Task e339d375-7af2-44fe-aee2-8cf97755718d is in state STARTED 2026-04-08 00:43:12.394902 | orchestrator | 2026-04-08 00:43:12 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED 2026-04-08 00:43:12.394924 | orchestrator | 2026-04-08 00:43:12 | INFO  | Task be226e0e-5e47-467f-9a79-36cf1f9fe328 is in state STARTED 2026-04-08 00:43:12.398726 | 
orchestrator | 2026-04-08 00:43:12 | INFO  | Task b4ed936e-d93c-448e-81a0-7f7ef0853050 is in state STARTED 2026-04-08 00:43:12.398789 | orchestrator | 2026-04-08 00:43:12 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:43:12.398857 | orchestrator | 2026-04-08 00:43:12 | INFO  | Task 9447d72c-2a1a-45e6-8419-1ff765483034 is in state STARTED 2026-04-08 00:43:12.402129 | orchestrator | 2026-04-08 00:43:12 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:43:12.402183 | orchestrator | 2026-04-08 00:43:12 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED 2026-04-08 00:43:12.402214 | orchestrator | 2026-04-08 00:43:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:43:15.506708 | orchestrator | 2026-04-08 00:43:15 | INFO  | Task e339d375-7af2-44fe-aee2-8cf97755718d is in state STARTED 2026-04-08 00:43:15.506901 | orchestrator | 2026-04-08 00:43:15 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED 2026-04-08 00:43:15.507168 | orchestrator | 2026-04-08 00:43:15 | INFO  | Task be226e0e-5e47-467f-9a79-36cf1f9fe328 is in state STARTED 2026-04-08 00:43:15.507721 | orchestrator | 2026-04-08 00:43:15 | INFO  | Task b4ed936e-d93c-448e-81a0-7f7ef0853050 is in state STARTED 2026-04-08 00:43:15.508624 | orchestrator | 2026-04-08 00:43:15 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:43:15.508718 | orchestrator | 2026-04-08 00:43:15 | INFO  | Task 9447d72c-2a1a-45e6-8419-1ff765483034 is in state SUCCESS 2026-04-08 00:43:15.509317 | orchestrator | 2026-04-08 00:43:15 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:43:15.509979 | orchestrator | 2026-04-08 00:43:15 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED 2026-04-08 00:43:15.510007 | orchestrator | 2026-04-08 00:43:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:43:18.607726 | 
orchestrator | 2026-04-08 00:43:18 | INFO  | Task e58199e8-1c32-4359-ab70-2780380ee383 is in state STARTED 2026-04-08 00:43:18.608548 | orchestrator | 2026-04-08 00:43:18 | INFO  | Task e339d375-7af2-44fe-aee2-8cf97755718d is in state STARTED 2026-04-08 00:43:18.609405 | orchestrator | 2026-04-08 00:43:18 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED 2026-04-08 00:43:18.611142 | orchestrator | 2026-04-08 00:43:18 | INFO  | Task be226e0e-5e47-467f-9a79-36cf1f9fe328 is in state SUCCESS 2026-04-08 00:43:18.612093 | orchestrator | 2026-04-08 00:43:18.612123 | orchestrator | 2026-04-08 00:43:18.612156 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-04-08 00:43:18.612165 | orchestrator | 2026-04-08 00:43:18.612172 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-04-08 00:43:18.612179 | orchestrator | Wednesday 08 April 2026 00:42:10 +0000 (0:00:00.569) 0:00:00.569 ******* 2026-04-08 00:43:18.612185 | orchestrator | ok: [testbed-manager] 2026-04-08 00:43:18.612192 | orchestrator | 2026-04-08 00:43:18.612199 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-04-08 00:43:18.612206 | orchestrator | Wednesday 08 April 2026 00:42:11 +0000 (0:00:01.430) 0:00:01.999 ******* 2026-04-08 00:43:18.612213 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-04-08 00:43:18.612220 | orchestrator | 2026-04-08 00:43:18.612227 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-04-08 00:43:18.612234 | orchestrator | Wednesday 08 April 2026 00:42:12 +0000 (0:00:00.644) 0:00:02.644 ******* 2026-04-08 00:43:18.612242 | orchestrator | changed: [testbed-manager] 2026-04-08 00:43:18.612249 | orchestrator | 2026-04-08 00:43:18.612256 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-04-08 
00:43:18.612262 | orchestrator | Wednesday 08 April 2026 00:42:14 +0000 (0:00:02.192) 0:00:04.836 ******* 2026-04-08 00:43:18.612269 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-04-08 00:43:18.612276 | orchestrator | ok: [testbed-manager] 2026-04-08 00:43:18.612283 | orchestrator | 2026-04-08 00:43:18.612289 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-04-08 00:43:18.612296 | orchestrator | Wednesday 08 April 2026 00:43:09 +0000 (0:00:54.589) 0:00:59.425 ******* 2026-04-08 00:43:18.612303 | orchestrator | changed: [testbed-manager] 2026-04-08 00:43:18.612310 | orchestrator | 2026-04-08 00:43:18.612317 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:43:18.612324 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:43:18.612332 | orchestrator | 2026-04-08 00:43:18.612339 | orchestrator | 2026-04-08 00:43:18.612346 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:43:18.612353 | orchestrator | Wednesday 08 April 2026 00:43:12 +0000 (0:00:03.232) 0:01:02.658 ******* 2026-04-08 00:43:18.612359 | orchestrator | =============================================================================== 2026-04-08 00:43:18.612366 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 54.59s 2026-04-08 00:43:18.612373 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.26s 2026-04-08 00:43:18.612380 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.19s 2026-04-08 00:43:18.612386 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.43s 2026-04-08 00:43:18.612393 | orchestrator | osism.services.phpmyadmin : Create required 
directories ----------------- 0.64s 2026-04-08 00:43:18.612400 | orchestrator | 2026-04-08 00:43:18.612407 | orchestrator | 2026-04-08 00:43:18.612413 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 00:43:18.612420 | orchestrator | 2026-04-08 00:43:18.612426 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 00:43:18.612433 | orchestrator | Wednesday 08 April 2026 00:42:59 +0000 (0:00:00.692) 0:00:00.692 ******* 2026-04-08 00:43:18.612440 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:43:18.612447 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:43:18.612454 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:43:18.612461 | orchestrator | 2026-04-08 00:43:18.612467 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 00:43:18.612487 | orchestrator | Wednesday 08 April 2026 00:42:59 +0000 (0:00:00.473) 0:00:01.165 ******* 2026-04-08 00:43:18.612494 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-08 00:43:18.612501 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-08 00:43:18.612514 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-08 00:43:18.612521 | orchestrator | 2026-04-08 00:43:18.612527 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-08 00:43:18.612535 | orchestrator | 2026-04-08 00:43:18.612541 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-08 00:43:18.612548 | orchestrator | Wednesday 08 April 2026 00:43:00 +0000 (0:00:00.823) 0:00:01.989 ******* 2026-04-08 00:43:18.612554 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:43:18.612561 | orchestrator | 2026-04-08 00:43:18.612568 | orchestrator | 
TASK [memcached : Ensuring config directories exist] *************************** 2026-04-08 00:43:18.612575 | orchestrator | Wednesday 08 April 2026 00:43:02 +0000 (0:00:02.046) 0:00:04.035 ******* 2026-04-08 00:43:18.612581 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-08 00:43:18.612585 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-08 00:43:18.612589 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-08 00:43:18.612593 | orchestrator | 2026-04-08 00:43:18.612597 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-08 00:43:18.612600 | orchestrator | Wednesday 08 April 2026 00:43:04 +0000 (0:00:02.528) 0:00:06.564 ******* 2026-04-08 00:43:18.612604 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-08 00:43:18.612608 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-08 00:43:18.612613 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-08 00:43:18.612617 | orchestrator | 2026-04-08 00:43:18.612622 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-04-08 00:43:18.612626 | orchestrator | Wednesday 08 April 2026 00:43:06 +0000 (0:00:01.996) 0:00:08.561 ******* 2026-04-08 00:43:18.612644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-08 00:43:18.612652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-08 00:43:18.612657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-08 00:43:18.612666 | orchestrator | 2026-04-08 00:43:18.612671 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-04-08 00:43:18.612676 | orchestrator | Wednesday 08 April 2026 00:43:09 +0000 (0:00:02.392) 0:00:10.954 ******* 
2026-04-08 00:43:18.612680 | orchestrator | changed: [testbed-node-0] => { 2026-04-08 00:43:18.612685 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:43:18.612690 | orchestrator | } 2026-04-08 00:43:18.612695 | orchestrator | changed: [testbed-node-1] => { 2026-04-08 00:43:18.612700 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:43:18.612705 | orchestrator | } 2026-04-08 00:43:18.612711 | orchestrator | changed: [testbed-node-2] => { 2026-04-08 00:43:18.612716 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:43:18.612732 | orchestrator | } 2026-04-08 00:43:18.612737 | orchestrator | 2026-04-08 00:43:18.612742 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-08 00:43:18.612748 | orchestrator | Wednesday 08 April 2026 00:43:09 +0000 (0:00:00.495) 0:00:11.450 ******* 2026-04-08 00:43:18.612754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-08 00:43:18.612766 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:43:18.612782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 
'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-08 00:43:18.612788 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:43:18.612794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-08 00:43:18.612816 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:43:18.612825 | orchestrator | 2026-04-08 00:43:18.612830 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-08 00:43:18.612835 | orchestrator | Wednesday 08 April 2026 00:43:12 +0000 (0:00:02.223) 0:00:13.674 ******* 2026-04-08 00:43:18.612851 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_jxdcxf72/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_jxdcxf72/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_jxdcxf72/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_jxdcxf72/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:43:18.612864 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_82xu6x45/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_82xu6x45/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_82xu6x45/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_82xu6x45/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:43:18.612949 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_37zgrzp4/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_37zgrzp4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_37zgrzp4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n 
self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_37zgrzp4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:43:18.612957 | orchestrator | 2026-04-08 00:43:18.612962 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:43:18.612967 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-08 00:43:18.612975 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-08 00:43:18.612979 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-08 00:43:18.612984 | orchestrator | 2026-04-08 00:43:18.612992 | orchestrator | 2026-04-08 00:43:18.612996 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:43:18.613001 | orchestrator | Wednesday 08 April 2026 00:43:14 +0000 (0:00:02.547) 0:00:16.221 ******* 2026-04-08 00:43:18.613006 | orchestrator | 
=============================================================================== 2026-04-08 00:43:18.613010 | orchestrator | memcached : Restart memcached container --------------------------------- 2.55s 2026-04-08 00:43:18.613015 | orchestrator | memcached : Ensuring config directories exist --------------------------- 2.53s 2026-04-08 00:43:18.613019 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.39s 2026-04-08 00:43:18.613024 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.22s 2026-04-08 00:43:18.613028 | orchestrator | memcached : include_tasks ----------------------------------------------- 2.05s 2026-04-08 00:43:18.613033 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.00s 2026-04-08 00:43:18.613037 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s 2026-04-08 00:43:18.613043 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.50s 2026-04-08 00:43:18.613051 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s 2026-04-08 00:43:18.613058 | orchestrator | 2026-04-08 00:43:18 | INFO  | Task b4ed936e-d93c-448e-81a0-7f7ef0853050 is in state STARTED 2026-04-08 00:43:18.613825 | orchestrator | 2026-04-08 00:43:18 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:43:18.614387 | orchestrator | 2026-04-08 00:43:18 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:43:18.615656 | orchestrator | 2026-04-08 00:43:18 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED 2026-04-08 00:43:18.615742 | orchestrator | 2026-04-08 00:43:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:43:21.651110 | orchestrator | 2026-04-08 00:43:21 | INFO  | Task e58199e8-1c32-4359-ab70-2780380ee383 is in state STARTED 
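[Editor's note] The three identical pull failures above share one root cause: the image reference `registry.osism.tech/kolla/release//memcached:1.6.24.20260328` contains a doubled slash, which Docker's reference parser rejects with "invalid reference format" (path components must be separated by a single `/`). The same `release//` pattern appears in every image string in this log, which suggests the configured registry namespace carries a trailing slash that is then joined with another `/` when the image name is appended. A minimal, hypothetical sketch of a join helper that would avoid this (function name and signature are illustrative, not taken from the kolla-ansible playbooks):

```python
def join_image_ref(registry: str, namespace: str, name: str, tag: str) -> str:
    """Build a Docker image reference, tolerating stray slashes.

    Stripping slashes from each component before joining prevents
    'kolla/release/' + '/memcached' from collapsing into
    'kolla/release//memcached', which the Docker daemon rejects as
    "invalid reference format".
    """
    parts = [p.strip("/") for p in (registry, namespace, name) if p]
    return "/".join(parts) + ":" + tag


# The failing reference from this log, rebuilt without the doubled slash:
ref = join_image_ref("registry.osism.tech", "kolla/release/",
                     "memcached", "1.6.24.20260328")
# → "registry.osism.tech/kolla/release/memcached:1.6.24.20260328"
```

In practice the fix is usually in configuration rather than code: drop the trailing slash from the namespace variable (e.g. `kolla/release/` → `kolla/release`) so the rendered image strings contain no `//`.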
2026-04-08 00:43:21.651627 | orchestrator | 2026-04-08 00:43:21 | INFO  | Task e339d375-7af2-44fe-aee2-8cf97755718d is in state SUCCESS 2026-04-08 00:43:21.654557 | orchestrator | 2026-04-08 00:43:21.654585 | orchestrator | 2026-04-08 00:43:21.654592 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 00:43:21.654599 | orchestrator | 2026-04-08 00:43:21.654606 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 00:43:21.654612 | orchestrator | Wednesday 08 April 2026 00:43:00 +0000 (0:00:00.899) 0:00:00.899 ******* 2026-04-08 00:43:21.654619 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:43:21.654627 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:43:21.654633 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:43:21.654640 | orchestrator | 2026-04-08 00:43:21.654646 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 00:43:21.654653 | orchestrator | Wednesday 08 April 2026 00:43:01 +0000 (0:00:00.991) 0:00:01.890 ******* 2026-04-08 00:43:21.654659 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-08 00:43:21.654666 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-08 00:43:21.654675 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-08 00:43:21.654686 | orchestrator | 2026-04-08 00:43:21.654696 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-08 00:43:21.654706 | orchestrator | 2026-04-08 00:43:21.654716 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-08 00:43:21.654726 | orchestrator | Wednesday 08 April 2026 00:43:02 +0000 (0:00:00.937) 0:00:02.827 ******* 2026-04-08 00:43:21.654735 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 
2026-04-08 00:43:21.654747 | orchestrator | 2026-04-08 00:43:21.654757 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-08 00:43:21.654789 | orchestrator | Wednesday 08 April 2026 00:43:04 +0000 (0:00:01.996) 0:00:04.825 ******* 2026-04-08 00:43:21.654818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.654830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.654837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.654844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.654869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.654876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.654888 | orchestrator | 2026-04-08 00:43:21.654895 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-08 00:43:21.654901 | orchestrator | Wednesday 08 April 2026 00:43:06 +0000 (0:00:02.596) 0:00:07.421 ******* 2026-04-08 00:43:21.654908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.654914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.654921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.654927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.654941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.654948 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.654959 | orchestrator | 2026-04-08 00:43:21.654965 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-08 00:43:21.654972 | orchestrator | Wednesday 08 April 2026 00:43:10 +0000 (0:00:03.336) 0:00:10.757 ******* 2026-04-08 00:43:21.654978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.654985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.654991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.654998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.655011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.655018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.655029 | orchestrator | 2026-04-08 00:43:21.655035 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-04-08 00:43:21.655041 | orchestrator | Wednesday 08 April 2026 00:43:14 +0000 (0:00:04.431) 0:00:15.189 ******* 2026-04-08 00:43:21.655048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.655054 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.655061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.655067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.655077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 
'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.655089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-08 00:43:21.655099 | orchestrator | 2026-04-08 00:43:21.655106 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-04-08 00:43:21.655112 | orchestrator | Wednesday 08 April 2026 00:43:17 +0000 (0:00:02.812) 0:00:18.002 ******* 2026-04-08 00:43:21.655119 | orchestrator | changed: [testbed-node-0] => { 2026-04-08 00:43:21.655125 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:43:21.655131 | orchestrator | } 2026-04-08 00:43:21.655138 | orchestrator | changed: [testbed-node-1] => { 2026-04-08 00:43:21.655144 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 
00:43:21.655150 | orchestrator | } 2026-04-08 00:43:21.655156 | orchestrator | changed: [testbed-node-2] => { 2026-04-08 00:43:21.655162 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:43:21.655170 | orchestrator | } 2026-04-08 00:43:21.655177 | orchestrator | 2026-04-08 00:43:21.655184 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-08 00:43:21.655192 | orchestrator | Wednesday 08 April 2026 00:43:17 +0000 (0:00:00.432) 0:00:18.434 ******* 2026-04-08 00:43:21.655199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-08 00:43:21.655208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-08 00:43:21.655215 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:43:21.655223 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-08 00:43:21.655230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-08 00:43:21.655244 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:43:21.655258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-08 00:43:21.655266 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-08 00:43:21.655273 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:43:21.655280 | orchestrator | 2026-04-08 00:43:21.655287 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-08 00:43:21.655294 | orchestrator | Wednesday 08 April 2026 00:43:18 +0000 (0:00:00.795) 0:00:19.229 ******* 2026-04-08 00:43:21.655301 | orchestrator | 2026-04-08 00:43:21.655308 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-08 00:43:21.655315 | orchestrator | Wednesday 08 April 2026 00:43:18 +0000 (0:00:00.084) 0:00:19.314 ******* 2026-04-08 00:43:21.655322 | orchestrator | 2026-04-08 00:43:21.655329 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-08 00:43:21.655336 | orchestrator | Wednesday 08 April 2026 00:43:18 +0000 (0:00:00.094) 0:00:19.409 ******* 2026-04-08 00:43:21.655343 | orchestrator | 2026-04-08 00:43:21.655350 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-04-08 00:43:21.655357 | orchestrator | Wednesday 08 April 2026 00:43:18 +0000 (0:00:00.122) 0:00:19.531 ******* 2026-04-08 00:43:21.655375 | orchestrator | fatal: [testbed-node-1]: 
FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload__ub2tot1/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload__ub2tot1/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload__ub2tot1/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload__ub2tot1/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise 
cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:43:21.655390 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_law8jhnu/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_law8jhnu/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_law8jhnu/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_law8jhnu/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:43:21.655405 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_oh53tb3z/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_oh53tb3z/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_oh53tb3z/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n 
self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_oh53tb3z/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:43:21.655422 | orchestrator | 2026-04-08 00:43:21.655429 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:43:21.655437 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-08 00:43:21.655444 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-08 00:43:21.655452 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-08 00:43:21.655459 | orchestrator | 2026-04-08 00:43:21.655465 | orchestrator | 2026-04-08 00:43:21.655473 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:43:21.655480 | orchestrator | Wednesday 08 April 2026 00:43:20 +0000 (0:00:01.959) 0:00:21.491 ******* 2026-04-08 00:43:21.655487 | orchestrator | 
=============================================================================== 2026-04-08 00:43:21.655494 | orchestrator | redis : Copying over redis config files --------------------------------- 4.43s 2026-04-08 00:43:21.655501 | orchestrator | redis : Copying over default config.json files -------------------------- 3.34s 2026-04-08 00:43:21.655509 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.81s 2026-04-08 00:43:21.655515 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.60s 2026-04-08 00:43:21.655523 | orchestrator | redis : include_tasks --------------------------------------------------- 2.00s 2026-04-08 00:43:21.655530 | orchestrator | redis : Restart redis container ----------------------------------------- 1.96s 2026-04-08 00:43:21.655537 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.99s 2026-04-08 00:43:21.655545 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s 2026-04-08 00:43:21.655551 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.80s 2026-04-08 00:43:21.655558 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.43s 2026-04-08 00:43:21.655568 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.30s 2026-04-08 00:43:21.655575 | orchestrator | 2026-04-08 00:43:21 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED 2026-04-08 00:43:21.673001 | orchestrator | 2026-04-08 00:43:21 | INFO  | Task b4ed936e-d93c-448e-81a0-7f7ef0853050 is in state STARTED 2026-04-08 00:43:21.674459 | orchestrator | 2026-04-08 00:43:21 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:43:21.675414 | orchestrator | 2026-04-08 00:43:21 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 
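Editor's note on the failure above (not part of the job output): all three nodes fail with the same 400 "invalid reference format" because the image reference carries an empty path component: `registry.osism.tech/kolla/release//redis:7.0.15.20260328`. Docker's reference grammar forbids empty repository components, so the doubled slash alone is enough to make the pull request invalid. The sketch below illustrates both halves of that diagnosis with a simplified approximation of the containers/distribution reference grammar (an assumption, not the exact regex the daemon uses); `build_ref` is a hypothetical helper showing how a trailing slash in a kolla-style namespace variable would produce exactly the `//` seen in the log.

```python
import re

# Simplified approximation of the containers/distribution image
# reference grammar (assumption: close enough to demonstrate the
# failure, not the daemon's exact validation).
COMPONENT = r"[a-z0-9]+(?:(?:[._]|__|-+)[a-z0-9]+)*"
TAG = r"[A-Za-z0-9_][A-Za-z0-9._-]{0,127}"
REFERENCE = re.compile(
    rf"^(?:[a-zA-Z0-9.-]+(?::[0-9]+)?/)?{COMPONENT}(?:/{COMPONENT})*(?::{TAG})?$"
)

def is_valid_reference(ref: str) -> bool:
    """True if ref parses as a plausible image reference.

    An empty path component ("//") can never match, because each
    component must start with [a-z0-9]."""
    return REFERENCE.match(ref) is not None

def build_ref(registry: str, namespace: str, image: str, tag: str = "") -> str:
    """Hypothetical naive join, as a template like
    "{{ registry }}/{{ namespace }}/{{ image }}" would do. A trailing
    slash in the configured namespace is one plausible source of the
    doubled slash (assumption; the actual variable is not in the log)."""
    ref = f"{registry}/{namespace}/{image}"
    return f"{ref}:{tag}" if tag else ref

# The reference without the doubled slash is well-formed:
print(is_valid_reference("registry.osism.tech/kolla/release/redis:7.0.15.20260328"))   # True
# A namespace with a trailing slash reproduces the reference from the log,
# which the grammar rejects:
print(is_valid_reference(build_ref("registry.osism.tech", "kolla/release/", "redis",
                                   "7.0.15.20260328")))                                # False
```

Under this reading, the fix is in configuration rather than on the nodes: whichever variable contributes the `kolla/release/` segment should not carry the trailing slash, so the rendered image becomes `registry.osism.tech/kolla/release/redis:7.0.15.20260328`.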
2026-04-08 00:43:21.680010 | orchestrator | 2026-04-08 00:43:21 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED 2026-04-08 00:43:21.680485 | orchestrator | 2026-04-08 00:43:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:43:24.717672 | orchestrator | 2026-04-08 00:43:24 | INFO  | Task e58199e8-1c32-4359-ab70-2780380ee383 is in state STARTED 2026-04-08 00:43:24.718483 | orchestrator | 2026-04-08 00:43:24 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED 2026-04-08 00:43:24.719281 | orchestrator | 2026-04-08 00:43:24 | INFO  | Task b4ed936e-d93c-448e-81a0-7f7ef0853050 is in state STARTED 2026-04-08 00:43:24.720960 | orchestrator | 2026-04-08 00:43:24 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:43:24.721514 | orchestrator | 2026-04-08 00:43:24 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:43:24.723703 | orchestrator | 2026-04-08 00:43:24 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state STARTED 2026-04-08 00:43:24.723733 | orchestrator | 2026-04-08 00:43:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:43:27.763083 | orchestrator | 2026-04-08 00:43:27 | INFO  | Task e58199e8-1c32-4359-ab70-2780380ee383 is in state STARTED 2026-04-08 00:43:27.763872 | orchestrator | 2026-04-08 00:43:27 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED 2026-04-08 00:43:27.765827 | orchestrator | 2026-04-08 00:43:27 | INFO  | Task b4ed936e-d93c-448e-81a0-7f7ef0853050 is in state STARTED 2026-04-08 00:43:27.766772 | orchestrator | 2026-04-08 00:43:27 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:43:27.767791 | orchestrator | 2026-04-08 00:43:27 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:43:27.769195 | orchestrator | 2026-04-08 00:43:27.769236 | orchestrator | 2026-04-08 00:43:27.769244 | orchestrator | PLAY [Group 
hosts based on configuration] ************************************** 2026-04-08 00:43:27.769251 | orchestrator | 2026-04-08 00:43:27.769257 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 00:43:27.769264 | orchestrator | Wednesday 08 April 2026 00:41:50 +0000 (0:00:00.473) 0:00:00.473 ******* 2026-04-08 00:43:27.769271 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-04-08 00:43:27.769278 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-04-08 00:43:27.769284 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-04-08 00:43:27.769290 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-04-08 00:43:27.769296 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-04-08 00:43:27.769303 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-04-08 00:43:27.769310 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-04-08 00:43:27.769316 | orchestrator | 2026-04-08 00:43:27.769323 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-04-08 00:43:27.769329 | orchestrator | 2026-04-08 00:43:27.769335 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-04-08 00:43:27.769364 | orchestrator | Wednesday 08 April 2026 00:41:52 +0000 (0:00:01.927) 0:00:02.401 ******* 2026-04-08 00:43:27.769372 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:43:27.769380 | orchestrator | 2026-04-08 00:43:27.769386 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-04-08 00:43:27.769392 | 
orchestrator | Wednesday 08 April 2026 00:41:55 +0000 (0:00:02.224) 0:00:04.625 ******* 2026-04-08 00:43:27.769397 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:43:27.769405 | orchestrator | ok: [testbed-manager] 2026-04-08 00:43:27.769411 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:43:27.769418 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:43:27.769424 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:43:27.769430 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:43:27.769436 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:43:27.769442 | orchestrator | 2026-04-08 00:43:27.769448 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-04-08 00:43:27.769454 | orchestrator | Wednesday 08 April 2026 00:41:58 +0000 (0:00:03.204) 0:00:07.830 ******* 2026-04-08 00:43:27.769459 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:43:27.769465 | orchestrator | ok: [testbed-manager] 2026-04-08 00:43:27.769470 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:43:27.769476 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:43:27.769483 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:43:27.769489 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:43:27.769495 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:43:27.769501 | orchestrator | 2026-04-08 00:43:27.769507 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-04-08 00:43:27.769512 | orchestrator | Wednesday 08 April 2026 00:42:01 +0000 (0:00:03.390) 0:00:11.221 ******* 2026-04-08 00:43:27.769519 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:43:27.769525 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:43:27.769531 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:43:27.769537 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:43:27.769542 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:43:27.769549 | orchestrator | changed: [testbed-node-3] 
2026-04-08 00:43:27.769555 | orchestrator | changed: [testbed-manager] 2026-04-08 00:43:27.769561 | orchestrator | 2026-04-08 00:43:27.769568 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-04-08 00:43:27.769574 | orchestrator | Wednesday 08 April 2026 00:42:04 +0000 (0:00:02.660) 0:00:13.881 ******* 2026-04-08 00:43:27.769581 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:43:27.769587 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:43:27.769593 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:43:27.769599 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:43:27.769605 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:43:27.769611 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:43:27.769618 | orchestrator | changed: [testbed-manager] 2026-04-08 00:43:27.769624 | orchestrator | 2026-04-08 00:43:27.769630 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-04-08 00:43:27.769636 | orchestrator | Wednesday 08 April 2026 00:42:14 +0000 (0:00:09.983) 0:00:23.865 ******* 2026-04-08 00:43:27.769643 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:43:27.769649 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:43:27.769655 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:43:27.769661 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:43:27.769667 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:43:27.769674 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:43:27.769842 | orchestrator | changed: [testbed-manager] 2026-04-08 00:43:27.769854 | orchestrator | 2026-04-08 00:43:27.769876 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-04-08 00:43:27.769893 | orchestrator | Wednesday 08 April 2026 00:42:58 +0000 (0:00:44.413) 0:01:08.278 ******* 2026-04-08 00:43:27.769901 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:43:27.769909 | orchestrator | 2026-04-08 00:43:27.769916 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-04-08 00:43:27.769923 | orchestrator | Wednesday 08 April 2026 00:43:00 +0000 (0:00:01.433) 0:01:09.712 ******* 2026-04-08 00:43:27.769929 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-04-08 00:43:27.769936 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-04-08 00:43:27.769943 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-04-08 00:43:27.769949 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-04-08 00:43:27.769969 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-04-08 00:43:27.769976 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-04-08 00:43:27.769982 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-04-08 00:43:27.769989 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-04-08 00:43:27.769995 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-04-08 00:43:27.770001 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-04-08 00:43:27.770008 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-04-08 00:43:27.770047 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-04-08 00:43:27.770056 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-04-08 00:43:27.770063 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-04-08 00:43:27.770070 | orchestrator | 2026-04-08 00:43:27.770076 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-04-08 
00:43:27.770085 | orchestrator | Wednesday 08 April 2026 00:43:04 +0000 (0:00:04.112) 0:01:13.824 ******* 2026-04-08 00:43:27.770092 | orchestrator | ok: [testbed-manager] 2026-04-08 00:43:27.770098 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:43:27.770105 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:43:27.770112 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:43:27.770119 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:43:27.770126 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:43:27.770132 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:43:27.770139 | orchestrator | 2026-04-08 00:43:27.770146 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-04-08 00:43:27.770153 | orchestrator | Wednesday 08 April 2026 00:43:05 +0000 (0:00:01.276) 0:01:15.101 ******* 2026-04-08 00:43:27.770160 | orchestrator | changed: [testbed-manager] 2026-04-08 00:43:27.770166 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:43:27.770172 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:43:27.770179 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:43:27.770185 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:43:27.770190 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:43:27.770196 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:43:27.770203 | orchestrator | 2026-04-08 00:43:27.770210 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-04-08 00:43:27.770216 | orchestrator | Wednesday 08 April 2026 00:43:06 +0000 (0:00:01.425) 0:01:16.526 ******* 2026-04-08 00:43:27.770223 | orchestrator | ok: [testbed-manager] 2026-04-08 00:43:27.770230 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:43:27.770236 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:43:27.770243 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:43:27.770249 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:43:27.770255 | orchestrator | ok: 
[testbed-node-5] 2026-04-08 00:43:27.770262 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:43:27.770268 | orchestrator | 2026-04-08 00:43:27.770275 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-04-08 00:43:27.770288 | orchestrator | Wednesday 08 April 2026 00:43:09 +0000 (0:00:02.748) 0:01:19.275 ******* 2026-04-08 00:43:27.770295 | orchestrator | ok: [testbed-manager] 2026-04-08 00:43:27.770301 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:43:27.770308 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:43:27.770315 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:43:27.770321 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:43:27.770327 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:43:27.770334 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:43:27.770340 | orchestrator | 2026-04-08 00:43:27.770347 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-04-08 00:43:27.770354 | orchestrator | Wednesday 08 April 2026 00:43:11 +0000 (0:00:02.235) 0:01:21.510 ******* 2026-04-08 00:43:27.770360 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-04-08 00:43:27.770370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:43:27.770377 | orchestrator | 2026-04-08 00:43:27.770383 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-04-08 00:43:27.770388 | orchestrator | Wednesday 08 April 2026 00:43:13 +0000 (0:00:01.781) 0:01:23.291 ******* 2026-04-08 00:43:27.770392 | orchestrator | changed: [testbed-manager] 2026-04-08 00:43:27.770396 | orchestrator | 2026-04-08 00:43:27.770400 | orchestrator | RUNNING HANDLER 
[osism.services.netdata : Restart service netdata] ************* 2026-04-08 00:43:27.770404 | orchestrator | Wednesday 08 April 2026 00:43:15 +0000 (0:00:01.749) 0:01:25.040 ******* 2026-04-08 00:43:27.770407 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:43:27.770411 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:43:27.770415 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:43:27.770418 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:43:27.770422 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:43:27.770426 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:43:27.770434 | orchestrator | changed: [testbed-manager] 2026-04-08 00:43:27.770437 | orchestrator | 2026-04-08 00:43:27.770441 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:43:27.770445 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:43:27.770450 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:43:27.770454 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:43:27.770458 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:43:27.770469 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:43:27.770473 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:43:27.770476 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:43:27.770480 | orchestrator | 2026-04-08 00:43:27.770484 | orchestrator | 2026-04-08 00:43:27.770488 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 
00:43:27.770492 | orchestrator | Wednesday 08 April 2026 00:43:26 +0000 (0:00:11.238) 0:01:36.279 ******* 2026-04-08 00:43:27.770495 | orchestrator | =============================================================================== 2026-04-08 00:43:27.770499 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 44.41s 2026-04-08 00:43:27.770507 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.24s 2026-04-08 00:43:27.770511 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.98s 2026-04-08 00:43:27.770514 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.11s 2026-04-08 00:43:27.770518 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.39s 2026-04-08 00:43:27.770522 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.20s 2026-04-08 00:43:27.770525 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.75s 2026-04-08 00:43:27.770529 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.66s 2026-04-08 00:43:27.770533 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.24s 2026-04-08 00:43:27.770537 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.22s 2026-04-08 00:43:27.770540 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.93s 2026-04-08 00:43:27.770544 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.78s 2026-04-08 00:43:27.770548 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.75s 2026-04-08 00:43:27.770552 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.43s 2026-04-08 00:43:27.770558 | 
orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.42s 2026-04-08 00:43:27.770564 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.28s 2026-04-08 00:43:27.770570 | orchestrator | 2026-04-08 00:43:27 | INFO  | Task 387595d3-d5f4-4fbe-ae5f-f230f8fb0cf7 is in state SUCCESS 2026-04-08 00:43:27.770577 | orchestrator | 2026-04-08 00:43:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:43:30.839915 | orchestrator | 2026-04-08 00:43:30 | INFO  | Task e58199e8-1c32-4359-ab70-2780380ee383 is in state STARTED 2026-04-08 00:43:30.841735 | orchestrator | 2026-04-08 00:43:30 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED 2026-04-08 00:43:30.847222 | orchestrator | 2026-04-08 00:43:30 | INFO  | Task b4ed936e-d93c-448e-81a0-7f7ef0853050 is in state STARTED 2026-04-08 00:43:30.847304 | orchestrator | 2026-04-08 00:43:30 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:43:30.851223 | orchestrator | 2026-04-08 00:43:30 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:43:30.851282 | orchestrator | 2026-04-08 00:43:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:43:33.896185 | orchestrator | 2026-04-08 00:43:33 | INFO  | Task e58199e8-1c32-4359-ab70-2780380ee383 is in state STARTED 2026-04-08 00:43:33.896625 | orchestrator | 2026-04-08 00:43:33 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED 2026-04-08 00:43:33.897508 | orchestrator | 2026-04-08 00:43:33 | INFO  | Task b4ed936e-d93c-448e-81a0-7f7ef0853050 is in state STARTED 2026-04-08 00:43:33.898174 | orchestrator | 2026-04-08 00:43:33 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:43:33.899538 | orchestrator | 2026-04-08 00:43:33 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:43:33.899572 | orchestrator 
| 2026-04-08 00:43:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:43:36.950142 | orchestrator | 2026-04-08 00:43:36 | INFO  | Task e58199e8-1c32-4359-ab70-2780380ee383 is in state STARTED 2026-04-08 00:43:36.951203 | orchestrator | 2026-04-08 00:43:36 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED 2026-04-08 00:43:36.955929 | orchestrator | 2026-04-08 00:43:36.955992 | orchestrator | 2026-04-08 00:43:36.956005 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 00:43:36.956040 | orchestrator | 2026-04-08 00:43:36.956051 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 00:43:36.956062 | orchestrator | Wednesday 08 April 2026 00:43:00 +0000 (0:00:00.934) 0:00:00.934 ******* 2026-04-08 00:43:36.956072 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:43:36.956084 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:43:36.956094 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:43:36.956104 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:43:36.956115 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:43:36.956125 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:43:36.956136 | orchestrator | 2026-04-08 00:43:36.956146 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 00:43:36.956161 | orchestrator | Wednesday 08 April 2026 00:43:01 +0000 (0:00:01.013) 0:00:01.948 ******* 2026-04-08 00:43:36.956172 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-08 00:43:36.956183 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-08 00:43:36.956193 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-08 00:43:36.956204 | orchestrator | ok: [testbed-node-3] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-08 00:43:36.956214 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-08 00:43:36.956224 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-08 00:43:36.956235 | orchestrator | 2026-04-08 00:43:36.956245 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-04-08 00:43:36.956255 | orchestrator | 2026-04-08 00:43:36.956266 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-04-08 00:43:36.956276 | orchestrator | Wednesday 08 April 2026 00:43:03 +0000 (0:00:01.975) 0:00:03.924 ******* 2026-04-08 00:43:36.956288 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5 2026-04-08 00:43:36.956300 | orchestrator | 2026-04-08 00:43:36.956310 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-08 00:43:36.956321 | orchestrator | Wednesday 08 April 2026 00:43:05 +0000 (0:00:02.176) 0:00:06.101 ******* 2026-04-08 00:43:36.956331 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-08 00:43:36.956342 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-08 00:43:36.956352 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-08 00:43:36.956363 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-08 00:43:36.956373 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-08 00:43:36.956384 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-08 00:43:36.956394 | orchestrator | 2026-04-08 00:43:36.956405 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-08 00:43:36.956415 | orchestrator | 
Wednesday 08 April 2026 00:43:07 +0000 (0:00:01.699) 0:00:07.800 ******* 2026-04-08 00:43:36.956426 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-08 00:43:36.956545 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-08 00:43:36.956559 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-08 00:43:36.956635 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-08 00:43:36.956926 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-08 00:43:36.956940 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-08 00:43:36.956951 | orchestrator | 2026-04-08 00:43:36.956962 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-08 00:43:36.956975 | orchestrator | Wednesday 08 April 2026 00:43:09 +0000 (0:00:02.662) 0:00:10.462 ******* 2026-04-08 00:43:36.956986 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-04-08 00:43:36.957009 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:43:36.957021 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-04-08 00:43:36.957032 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:43:36.957043 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-04-08 00:43:36.957054 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-04-08 00:43:36.957065 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:43:36.957075 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-04-08 00:43:36.957086 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:43:36.957097 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:43:36.957108 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-04-08 00:43:36.957120 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:43:36.957130 | orchestrator | 2026-04-08 00:43:36.957141 | orchestrator | TASK 
[openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-08 00:43:36.957152 | orchestrator | Wednesday 08 April 2026 00:43:12 +0000 (0:00:02.606) 0:00:13.068 ******* 2026-04-08 00:43:36.957163 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:43:36.957174 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:43:36.957185 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:43:36.957209 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:43:36.957221 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:43:36.957232 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:43:36.957242 | orchestrator | 2026-04-08 00:43:36.957253 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-08 00:43:36.957264 | orchestrator | Wednesday 08 April 2026 00:43:14 +0000 (0:00:02.005) 0:00:15.074 ******* 2026-04-08 00:43:36.957292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957332 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957350 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957396 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957407 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957426 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957436 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957455 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 
00:43:36.957466 | orchestrator | 2026-04-08 00:43:36.957475 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-08 00:43:36.957485 | orchestrator | Wednesday 08 April 2026 00:43:17 +0000 (0:00:02.633) 0:00:17.708 ******* 2026-04-08 00:43:36.957494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957546 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957562 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957573 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957610 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957630 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 
'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957641 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957730 | orchestrator | 2026-04-08 00:43:36.957742 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-08 00:43:36.957751 | orchestrator | Wednesday 08 April 2026 00:43:20 +0000 (0:00:02.890) 0:00:20.599 ******* 2026-04-08 00:43:36.957759 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:43:36.957768 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:43:36.957777 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:43:36.957785 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:43:36.957821 | orchestrator | skipping: 
[testbed-node-4] 2026-04-08 00:43:36.957830 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:43:36.957839 | orchestrator | 2026-04-08 00:43:36.957847 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-04-08 00:43:36.957856 | orchestrator | Wednesday 08 April 2026 00:43:21 +0000 (0:00:01.396) 0:00:21.995 ******* 2026-04-08 00:43:36.957866 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957886 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957918 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957941 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957959 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957972 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:43:36.957998 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:43:36.958012 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-08 00:43:36.958064 | orchestrator | 2026-04-08 00:43:36.958075 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-04-08 00:43:36.958084 | orchestrator | Wednesday 08 April 2026 00:43:24 +0000 (0:00:03.233) 0:00:25.228 ******* 2026-04-08 00:43:36.958094 | orchestrator | changed: [testbed-node-0] => { 2026-04-08 00:43:36.958103 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:43:36.958112 | orchestrator | } 2026-04-08 00:43:36.958122 | orchestrator | changed: 
[testbed-node-1] => { 2026-04-08 00:43:36.958132 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:43:36.958142 | orchestrator | } 2026-04-08 00:43:36.958151 | orchestrator | changed: [testbed-node-2] => { 2026-04-08 00:43:36.958161 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:43:36.958170 | orchestrator | } 2026-04-08 00:43:36.958180 | orchestrator | changed: [testbed-node-3] => { 2026-04-08 00:43:36.958189 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:43:36.958199 | orchestrator | } 2026-04-08 00:43:36.958208 | orchestrator | changed: [testbed-node-4] => { 2026-04-08 00:43:36.958217 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:43:36.958227 | orchestrator | } 2026-04-08 00:43:36.958236 | orchestrator | changed: [testbed-node-5] => { 2026-04-08 00:43:36.958246 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:43:36.958256 | orchestrator | } 2026-04-08 00:43:36.958265 | orchestrator | 2026-04-08 00:43:36.958273 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-08 00:43:36.958283 | orchestrator | Wednesday 08 April 2026 00:43:26 +0000 (0:00:01.349) 0:00:26.578 ******* 2026-04-08 00:43:36.958292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-08 00:43:36.958310 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-08 00:43:36.958329 | orchestrator | 2026-04-08 00:43:36 | INFO  | Task b4ed936e-d93c-448e-81a0-7f7ef0853050 is in state SUCCESS 2026-04-08 00:43:36.958338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-08 00:43:36.958932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes':
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-08 00:43:36.958968 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:43:36.958978 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:43:36.958988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-08 00:43:36.959000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-08 00:43:36.959010 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:43:36.959024 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-08 00:43:36.959056 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-08 00:43:36.959067 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:43:36.959077 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-08 00:43:36.959087 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-08 00:43:36.959097 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:43:36.959108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-08 00:43:36.959118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-08 00:43:36.959134 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:43:36.959144 | orchestrator | 2026-04-08 00:43:36.959158 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-08 00:43:36.959169 | orchestrator | Wednesday 08 April 2026 00:43:30 +0000 (0:00:04.270) 0:00:30.848 ******* 2026-04-08 00:43:36.959179 | orchestrator | 2026-04-08 00:43:36.959189 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-08 00:43:36.959199 | orchestrator | Wednesday 08 April 2026 00:43:31 +0000 (0:00:00.688) 0:00:31.537 ******* 2026-04-08 00:43:36.959209 | orchestrator | 2026-04-08 00:43:36.959219 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-08 00:43:36.959229 | orchestrator | Wednesday 08 April 2026 00:43:31 +0000 (0:00:00.233) 0:00:31.771 ******* 2026-04-08 00:43:36.959238 | orchestrator | 2026-04-08 00:43:36.959253 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-08 00:43:36.959263 | orchestrator | Wednesday 08 April 2026 00:43:31 +0000 (0:00:00.305) 
0:00:32.076 ******* 2026-04-08 00:43:36.959274 | orchestrator | 2026-04-08 00:43:36.959284 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-08 00:43:36.959294 | orchestrator | Wednesday 08 April 2026 00:43:31 +0000 (0:00:00.248) 0:00:32.325 ******* 2026-04-08 00:43:36.959304 | orchestrator | 2026-04-08 00:43:36.959314 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-08 00:43:36.959324 | orchestrator | Wednesday 08 April 2026 00:43:32 +0000 (0:00:00.277) 0:00:32.602 ******* 2026-04-08 00:43:36.959334 | orchestrator | 2026-04-08 00:43:36.959344 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-08 00:43:36.959354 | orchestrator | Wednesday 08 April 2026 00:43:32 +0000 (0:00:00.197) 0:00:32.800 ******* 2026-04-08 00:43:36.959365 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_syna8ioa/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/tmp/ansible_kolla_container_payload_syna8ioa/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_syna8ioa/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_syna8ioa/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:43:36.959396 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ro3cwc3j/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ro3cwc3j/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_ro3cwc3j/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ro3cwc3j/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:43:36.959413 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_nppa5iuq/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_nppa5iuq/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_nppa5iuq/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_nppa5iuq/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:43:36.959439 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_4z2zupap/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_4z2zupap/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_4z2zupap/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_4z2zupap/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:43:36.959461 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_afwsc8n7/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_afwsc8n7/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_afwsc8n7/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_afwsc8n7/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:43:36.959479 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_1s3kfacq/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_1s3kfacq/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_1s3kfacq/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_1s3kfacq/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:43:36.959498 | orchestrator | 2026-04-08 00:43:36.959508 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:43:36.959518 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-08 00:43:36.959530 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-08 00:43:36.959543 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-08 00:43:36.959553 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-08 00:43:36.959563 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-08 00:43:36.959577 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-08 00:43:36.959587 | orchestrator | 2026-04-08 00:43:36.959597 | orchestrator | 2026-04-08 00:43:36.959607 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:43:36.959616 | orchestrator | Wednesday 08 April 2026 00:43:36 +0000 (0:00:03.753) 
0:00:36.554 ******* 2026-04-08 00:43:36.959626 | orchestrator | =============================================================================== 2026-04-08 00:43:36.959636 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.27s 2026-04-08 00:43:36.959646 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 3.75s 2026-04-08 00:43:36.959655 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.23s 2026-04-08 00:43:36.959665 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.89s 2026-04-08 00:43:36.959675 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.66s 2026-04-08 00:43:36.959684 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.63s 2026-04-08 00:43:36.959694 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.61s 2026-04-08 00:43:36.959703 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.16s 2026-04-08 00:43:36.959713 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.00s 2026-04-08 00:43:36.959723 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.99s 2026-04-08 00:43:36.959732 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.95s 2026-04-08 00:43:36.959742 | orchestrator | module-load : Load modules ---------------------------------------------- 1.70s 2026-04-08 00:43:36.959751 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.40s 2026-04-08 00:43:36.959761 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 1.35s 2026-04-08 00:43:36.959770 | orchestrator | Group hosts based on Kolla action 
--------------------------------------- 1.01s
2026-04-08 00:43:36.959780 | orchestrator | 2026-04-08 00:43:36 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:43:36.961118 | orchestrator | 2026-04-08 00:43:36 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED
2026-04-08 00:43:36.962222 | orchestrator | 2026-04-08 00:43:36 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:43:40.014984 | orchestrator | 2026-04-08 00:43:40 | INFO  | Task e58199e8-1c32-4359-ab70-2780380ee383 is in state STARTED
2026-04-08 00:43:40.018271 | orchestrator | 2026-04-08 00:43:40 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED
2026-04-08 00:43:40.021808 | orchestrator | 2026-04-08 00:43:40 | INFO  | Task aedb2b03-388c-488a-88fd-802fec8ddf8a is in state STARTED
2026-04-08 00:43:40.079153 | orchestrator | 2026-04-08 00:43:40 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:43:40.079256 | orchestrator | 2026-04-08 00:43:40 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED
2026-04-08 00:43:40.079271 | orchestrator | 2026-04-08 00:43:40 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:43:43.058487 | orchestrator | 2026-04-08 00:43:43 | INFO  | Task e58199e8-1c32-4359-ab70-2780380ee383 is in state STARTED
2026-04-08 00:43:43.058759 | orchestrator | 2026-04-08 00:43:43 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED
2026-04-08 00:43:43.059630 | orchestrator | 2026-04-08 00:43:43 | INFO  | Task aedb2b03-388c-488a-88fd-802fec8ddf8a is in state STARTED
2026-04-08 00:43:43.060635 | orchestrator | 2026-04-08 00:43:43 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:43:43.061373 | orchestrator | 2026-04-08 00:43:43 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED
2026-04-08 00:43:43.061542 | orchestrator | 2026-04-08 00:43:43 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:43:46.097196 | orchestrator | 2026-04-08 00:43:46 | INFO  | Task e58199e8-1c32-4359-ab70-2780380ee383 is in state STARTED
2026-04-08 00:43:46.097335 | orchestrator | 2026-04-08 00:43:46 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED
2026-04-08 00:43:46.097368 | orchestrator | 2026-04-08 00:43:46 | INFO  | Task aedb2b03-388c-488a-88fd-802fec8ddf8a is in state STARTED
2026-04-08 00:43:46.098657 | orchestrator | 2026-04-08 00:43:46 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:43:46.099382 | orchestrator | 2026-04-08 00:43:46 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED
2026-04-08 00:43:46.099407 | orchestrator | 2026-04-08 00:43:46 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:43:49.152265 | orchestrator | 2026-04-08 00:43:49 | INFO  | Task e58199e8-1c32-4359-ab70-2780380ee383 is in state STARTED
2026-04-08 00:43:49.152900 | orchestrator | 2026-04-08 00:43:49 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED
2026-04-08 00:43:49.153942 | orchestrator | 2026-04-08 00:43:49 | INFO  | Task aedb2b03-388c-488a-88fd-802fec8ddf8a is in state STARTED
2026-04-08 00:43:49.156417 | orchestrator | 2026-04-08 00:43:49 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:43:49.157819 | orchestrator | 2026-04-08 00:43:49 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED
2026-04-08 00:43:49.157845 | orchestrator | 2026-04-08 00:43:49 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:43:52.187214 | orchestrator | 2026-04-08 00:43:52 | INFO  | Task e58199e8-1c32-4359-ab70-2780380ee383 is in state STARTED
2026-04-08 00:43:52.187885 | orchestrator | 2026-04-08 00:43:52 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED
2026-04-08 00:43:52.189072 | orchestrator | 2026-04-08 00:43:52 | INFO  | Task aedb2b03-388c-488a-88fd-802fec8ddf8a is in state STARTED
2026-04-08 00:43:52.189653 | orchestrator | 2026-04-08 00:43:52 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:43:52.190876 | orchestrator | 2026-04-08 00:43:52 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED
2026-04-08 00:43:52.190956 | orchestrator | 2026-04-08 00:43:52 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:43:55.257494 | orchestrator | 2026-04-08 00:43:55 | INFO  | Task e58199e8-1c32-4359-ab70-2780380ee383 is in state STARTED
2026-04-08 00:43:55.258080 | orchestrator | 2026-04-08 00:43:55 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED
2026-04-08 00:43:55.258976 | orchestrator | 2026-04-08 00:43:55 | INFO  | Task aedb2b03-388c-488a-88fd-802fec8ddf8a is in state STARTED
2026-04-08 00:43:55.260096 | orchestrator | 2026-04-08 00:43:55 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:43:55.260582 | orchestrator | 2026-04-08 00:43:55 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED
2026-04-08 00:43:55.260644 | orchestrator | 2026-04-08 00:43:55 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:43:58.306348 | orchestrator | 2026-04-08 00:43:58 | INFO  | Task e58199e8-1c32-4359-ab70-2780380ee383 is in state STARTED
2026-04-08 00:43:58.308716 | orchestrator | 2026-04-08 00:43:58 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED
2026-04-08 00:43:58.311567 | orchestrator |
2026-04-08 00:43:58.311607 | orchestrator | 2026-04-08 00:43:58 | INFO  | Task aedb2b03-388c-488a-88fd-802fec8ddf8a is in state SUCCESS
2026-04-08 00:43:58.312918 | orchestrator |
2026-04-08 00:43:58.313013 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 00:43:58.313029 | orchestrator |
2026-04-08 00:43:58.313042 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 00:43:58.313054 | orchestrator | Wednesday 08 April 2026 00:43:41 +0000 (0:00:00.376) 0:00:00.376 *******
2026-04-08 00:43:58.313065 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:43:58.313078 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:43:58.313089 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:43:58.313100 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:43:58.313111 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:43:58.313121 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:43:58.313132 | orchestrator |
2026-04-08 00:43:58.313144 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 00:43:58.313155 | orchestrator | Wednesday 08 April 2026 00:43:42 +0000 (0:00:00.835) 0:00:01.211 *******
2026-04-08 00:43:58.313166 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-04-08 00:43:58.313177 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-04-08 00:43:58.313188 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-04-08 00:43:58.313199 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-04-08 00:43:58.313210 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-04-08 00:43:58.313221 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-04-08 00:43:58.313232 | orchestrator |
2026-04-08 00:43:58.313242 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-04-08 00:43:58.313254 | orchestrator |
2026-04-08 00:43:58.313265 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-04-08 00:43:58.313276 | orchestrator | Wednesday 08 April 2026 00:43:43 +0000 (0:00:01.570) 0:00:02.781 *******
2026-04-08 00:43:58.313304 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:43:58.313318 | orchestrator |
2026-04-08 00:43:58.313329 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-04-08 00:43:58.313340 | orchestrator | Wednesday 08 April 2026 00:43:45 +0000 (0:00:01.282) 0:00:04.064 *******
2026-04-08 00:43:58.313353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313437 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313457 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313509 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313531 | orchestrator |
2026-04-08 00:43:58.313548 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-04-08 00:43:58.313566 | orchestrator | Wednesday 08 April 2026 00:43:46 +0000 (0:00:01.666) 0:00:05.730 *******
2026-04-08 00:43:58.313588 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313670 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313711 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313731 | orchestrator |
2026-04-08 00:43:58.313743 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-04-08 00:43:58.313754 | orchestrator | Wednesday 08 April 2026 00:43:48 +0000 (0:00:01.522) 0:00:07.253 *******
2026-04-08 00:43:58.313765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313846 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313857 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313881 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313892 | orchestrator |
2026-04-08 00:43:58.313903 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-04-08 00:43:58.313914 | orchestrator | Wednesday 08 April 2026 00:43:49 +0000 (0:00:01.300) 0:00:08.553 *******
2026-04-08 00:43:58.313925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313958 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313987 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.313998 | orchestrator |
2026-04-08 00:43:58.314103 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************
2026-04-08 00:43:58.314119 | orchestrator | Wednesday 08 April 2026 00:43:51 +0000 (0:00:02.107) 0:00:10.660 *******
2026-04-08 00:43:58.314140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.314157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.314169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.314181 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.314192 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.314204 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.314214 | orchestrator |
2026-04-08 00:43:58.314226 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] ***
2026-04-08 00:43:58.314237 | orchestrator | Wednesday 08 April 2026 00:43:53 +0000 (0:00:01.103) 0:00:12.334 *******
2026-04-08 00:43:58.314477 | orchestrator | changed: [testbed-node-0] => {
2026-04-08 00:43:58.314500 | orchestrator |  "msg": "Notifying handlers"
2026-04-08 00:43:58.314511 | orchestrator | }
2026-04-08 00:43:58.314532 | orchestrator | changed: [testbed-node-1] => {
2026-04-08 00:43:58.314550 | orchestrator |  "msg": "Notifying handlers"
2026-04-08 00:43:58.314567 | orchestrator | }
2026-04-08 00:43:58.314585 | orchestrator | changed: [testbed-node-2] => {
2026-04-08 00:43:58.314603 | orchestrator |  "msg": "Notifying handlers"
2026-04-08 00:43:58.314620 | orchestrator | }
2026-04-08 00:43:58.314638 | orchestrator | changed: [testbed-node-3] => {
2026-04-08 00:43:58.314657 | orchestrator |  "msg": "Notifying handlers"
2026-04-08 00:43:58.314675 | orchestrator | }
2026-04-08 00:43:58.314693 | orchestrator | changed: [testbed-node-4] => {
2026-04-08 00:43:58.314710 | orchestrator |  "msg": "Notifying handlers"
2026-04-08 00:43:58.314728 | orchestrator | }
2026-04-08 00:43:58.314746 | orchestrator | changed: [testbed-node-5] => {
2026-04-08 00:43:58.314811 | orchestrator |  "msg": "Notifying handlers"
2026-04-08 00:43:58.314835 | orchestrator | }
2026-04-08 00:43:58.314855 | orchestrator |
2026-04-08 00:43:58.314876 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-08 00:43:58.314953 | orchestrator | Wednesday 08 April 2026 00:43:54 +0000 (0:00:01.103) 0:00:13.438 *******
2026-04-08 00:43:58.314986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.314998 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:43:58.315010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.315021 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:43:58.315041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.315054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.315065 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:43:58.315075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.315086 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:43:58.315098 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:43:58.315109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:43:58.315120 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:43:58.315130 | orchestrator |
2026-04-08 00:43:58.315141 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-04-08 00:43:58.315152 | orchestrator | Wednesday 08 April 2026 00:43:56 +0000 (0:00:01.475) 0:00:14.913 *******
2026-04-08 00:43:58.315166 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-08 00:43:58.315185 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-08 00:43:58.315215 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-08 00:43:58.315233 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-08 00:43:58.315250 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-08 00:43:58.315268 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-08 00:43:58.315286 | orchestrator |
2026-04-08 00:43:58.315304 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:43:58.315336 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-08 00:43:58.315356 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-08 00:43:58.315376 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-08 00:43:58.315396 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-08 00:43:58.315415 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-08 00:43:58.315434 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-08 00:43:58.315454 | orchestrator |
2026-04-08 00:43:58.315473 | orchestrator |
2026-04-08 00:43:58.315642 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:43:58.315658 | orchestrator | Wednesday 08 April 2026 00:43:57 +0000 (0:00:01.560) 0:00:16.473 *******
2026-04-08 00:43:58.315670 | orchestrator | ===============================================================================
2026-04-08 00:43:58.315681 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.11s
2026-04-08 00:43:58.315692 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 1.67s
2026-04-08 00:43:58.315702 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.67s
2026-04-08 00:43:58.315713 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.57s
2026-04-08 00:43:58.315725 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 1.56s
2026-04-08 00:43:58.315735 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.52s
2026-04-08 00:43:58.315746 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.47s
2026-04-08 00:43:58.315756 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.30s
2026-04-08 00:43:58.315873 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.28s
2026-04-08 00:43:58.315887 | orchestrator | service-check-containers : ovn_controller | Notify handlers to restart containers --- 1.10s
2026-04-08 00:43:58.315898 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.84s
2026-04-08 00:43:58.315909 | orchestrator | 2026-04-08 00:43:58 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:43:58.315927 | orchestrator | 2026-04-08 00:43:58 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED
2026-04-08 00:43:58.315939 | orchestrator | 2026-04-08 00:43:58 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:44:01.357071 | orchestrator | 2026-04-08 00:44:01 | INFO  | Task e58199e8-1c32-4359-ab70-2780380ee383 is in state SUCCESS
2026-04-08 00:44:01.358887 | orchestrator |
2026-04-08 00:44:01.358944 | orchestrator |
2026-04-08 00:44:01.358959 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-04-08 00:44:01.359085 | orchestrator |
2026-04-08 00:44:01.359103 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-04-08 00:44:01.359115 | orchestrator | Wednesday 08 April 2026 00:43:21 +0000 (0:00:00.102) 0:00:00.102 *******
2026-04-08 00:44:01.359127 | orchestrator | ok: [localhost] => {
2026-04-08 00:44:01.359301 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-04-08 00:44:01.359319 | orchestrator | }
2026-04-08 00:44:01.359331 | orchestrator |
2026-04-08 00:44:01.359339 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-04-08 00:44:01.359346 | orchestrator | Wednesday 08 April 2026 00:43:21 +0000 (0:00:00.103) 0:00:00.205 *******
2026-04-08 00:44:01.359354 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-04-08 00:44:01.359363 | orchestrator | ...ignoring
2026-04-08 00:44:01.359370 | orchestrator |
2026-04-08 00:44:01.359377 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-04-08 00:44:01.359384 | orchestrator | Wednesday 08 April 2026 00:43:25 +0000 (0:00:03.998) 0:00:04.204 *******
2026-04-08 00:44:01.359391 | orchestrator | skipping: [localhost]
2026-04-08 00:44:01.359398 | orchestrator |
2026-04-08 00:44:01.359405 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-04-08 00:44:01.359412 | orchestrator | Wednesday 08 April 2026 00:43:25 +0000 (0:00:00.117) 0:00:04.321 *******
2026-04-08 00:44:01.359418 | orchestrator | ok: [localhost]
2026-04-08 00:44:01.359425 | orchestrator |
2026-04-08 00:44:01.359504 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 00:44:01.359519 | orchestrator |
2026-04-08 00:44:01.359525 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 00:44:01.359532 | orchestrator | Wednesday 08 April 2026 00:43:25 +0000 (0:00:00.427) 0:00:04.749 *******
2026-04-08 00:44:01.359539 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:44:01.359546 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:44:01.359553 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:44:01.359560 | orchestrator |
2026-04-08 00:44:01.359566 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 00:44:01.359573 | orchestrator | Wednesday 08 April 2026 00:43:26 +0000 (0:00:00.570) 0:00:05.320 *******
2026-04-08 00:44:01.359580 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-04-08 00:44:01.359587 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-04-08 00:44:01.359594 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-04-08 00:44:01.359600 | orchestrator |
2026-04-08 00:44:01.359612 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-04-08 00:44:01.359620 | orchestrator |
2026-04-08 00:44:01.359627 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-08 00:44:01.359634 | orchestrator | Wednesday 08 April 2026 00:43:27 +0000 (0:00:01.180) 0:00:06.500 *******
2026-04-08 00:44:01.359641 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:44:01.359647 | orchestrator |
2026-04-08 00:44:01.359654 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-04-08 00:44:01.359661 | orchestrator | Wednesday 08 April 2026 00:43:29 +0000 (0:00:02.110) 0:00:08.610 *******
2026-04-08 00:44:01.359667 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:44:01.359674 | orchestrator |
2026-04-08 00:44:01.359681 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-04-08 00:44:01.359687 | orchestrator | Wednesday 08 April 2026 00:43:31 +0000 (0:00:01.636) 0:00:10.246 *******
2026-04-08 00:44:01.359712 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:44:01.359720 | orchestrator |
2026-04-08 00:44:01.359726 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-04-08 00:44:01.359733 | orchestrator | Wednesday 08 April 2026 00:43:31 +0000 (0:00:00.326) 0:00:10.573 *******
2026-04-08 00:44:01.359740 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:44:01.359746 | orchestrator |
2026-04-08 00:44:01.359757 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-04-08 00:44:01.359764 | orchestrator | Wednesday 08 April 2026 00:43:31 +0000 (0:00:00.348) 0:00:10.896 *******
2026-04-08 00:44:01.359796 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:44:01.359809 | orchestrator |
2026-04-08 00:44:01.359821 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-04-08 00:44:01.359833 | orchestrator | Wednesday 08 April 2026 00:43:32 +0000 (0:00:00.513) 0:00:11.245 *******
2026-04-08 00:44:01.359842 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:44:01.359880 | orchestrator |
2026-04-08 00:44:01.359887 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-08 00:44:01.359894 | orchestrator | Wednesday 08 April 2026 00:43:32 +0000 (0:00:00.513) 0:00:11.758 *******
2026-04-08 00:44:01.359901 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:44:01.359907 | orchestrator |
2026-04-08 00:44:01.359914 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-04-08 00:44:01.359921 | orchestrator | Wednesday 08 April 2026 00:43:34 +0000 (0:00:01.756) 0:00:13.515 *******
2026-04-08 00:44:01.359927 | orchestrator | ok: [testbed-node-0]
2026-04-08
00:44:01.359934 | orchestrator | 2026-04-08 00:44:01.359940 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-08 00:44:01.359947 | orchestrator | Wednesday 08 April 2026 00:43:35 +0000 (0:00:01.161) 0:00:14.676 ******* 2026-04-08 00:44:01.359953 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:44:01.359960 | orchestrator | 2026-04-08 00:44:01.359967 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-08 00:44:01.359973 | orchestrator | Wednesday 08 April 2026 00:43:36 +0000 (0:00:00.641) 0:00:15.318 ******* 2026-04-08 00:44:01.359980 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:44:01.359986 | orchestrator | 2026-04-08 00:44:01.360004 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-08 00:44:01.360011 | orchestrator | Wednesday 08 April 2026 00:43:36 +0000 (0:00:00.296) 0:00:15.615 ******* 2026-04-08 00:44:01.360022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': 
{'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:44:01.360033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:44:01.360054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:44:01.360062 | orchestrator | 2026-04-08 00:44:01.360069 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-08 00:44:01.360075 | orchestrator | Wednesday 08 April 2026 00:43:37 +0000 (0:00:01.300) 0:00:16.915 ******* 2026-04-08 00:44:01.360099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:44:01.360112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:44:01.360133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:44:01.360147 | orchestrator | 2026-04-08 
00:44:01.360158 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-08 00:44:01.360169 | orchestrator | Wednesday 08 April 2026 00:43:39 +0000 (0:00:01.884) 0:00:18.799 ******* 2026-04-08 00:44:01.360186 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-08 00:44:01.360199 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-08 00:44:01.360210 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-08 00:44:01.360222 | orchestrator | 2026-04-08 00:44:01.360233 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-04-08 00:44:01.360244 | orchestrator | Wednesday 08 April 2026 00:43:41 +0000 (0:00:01.995) 0:00:20.795 ******* 2026-04-08 00:44:01.360256 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-08 00:44:01.360267 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-08 00:44:01.360279 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-08 00:44:01.360290 | orchestrator | 2026-04-08 00:44:01.360302 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-08 00:44:01.360314 | orchestrator | Wednesday 08 April 2026 00:43:44 +0000 (0:00:02.987) 0:00:23.782 ******* 2026-04-08 00:44:01.360415 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-08 00:44:01.360425 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-08 00:44:01.360432 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-08 
00:44:01.360440 | orchestrator | 2026-04-08 00:44:01.360460 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-08 00:44:01.360477 | orchestrator | Wednesday 08 April 2026 00:43:45 +0000 (0:00:01.304) 0:00:25.087 ******* 2026-04-08 00:44:01.360491 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-08 00:44:01.360501 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-08 00:44:01.360510 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-08 00:44:01.360521 | orchestrator | 2026-04-08 00:44:01.360530 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-04-08 00:44:01.360540 | orchestrator | Wednesday 08 April 2026 00:43:47 +0000 (0:00:01.451) 0:00:26.538 ******* 2026-04-08 00:44:01.360559 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-08 00:44:01.360569 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-08 00:44:01.360580 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-08 00:44:01.360590 | orchestrator | 2026-04-08 00:44:01.360601 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-08 00:44:01.360612 | orchestrator | Wednesday 08 April 2026 00:43:48 +0000 (0:00:01.310) 0:00:27.849 ******* 2026-04-08 00:44:01.360623 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-08 00:44:01.360634 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-08 00:44:01.360646 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-08 00:44:01.360655 | orchestrator | 2026-04-08 00:44:01.360661 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-08 00:44:01.360668 | orchestrator | Wednesday 08 April 2026 00:43:50 +0000 (0:00:01.283) 0:00:29.132 ******* 2026-04-08 00:44:01.360675 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:44:01.360682 | orchestrator | 2026-04-08 00:44:01.360688 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-04-08 00:44:01.360695 | orchestrator | Wednesday 08 April 2026 00:43:51 +0000 (0:00:01.372) 0:00:30.505 ******* 2026-04-08 00:44:01.360703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:44:01.360717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:44:01.360732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': 
{'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:44:01.360745 | orchestrator | 2026-04-08 00:44:01.360752 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-04-08 00:44:01.360759 | orchestrator | Wednesday 08 April 2026 00:43:52 +0000 (0:00:01.383) 0:00:31.888 ******* 2026-04-08 00:44:01.360766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-08 00:44:01.360824 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:44:01.360837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-08 00:44:01.360845 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:44:01.360860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-08 00:44:01.360873 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:44:01.360880 | orchestrator | 2026-04-08 
00:44:01.360887 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-04-08 00:44:01.360893 | orchestrator | Wednesday 08 April 2026 00:43:53 +0000 (0:00:00.771) 0:00:32.660 ******* 2026-04-08 00:44:01.360901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-08 00:44:01.360908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-08 00:44:01.360916 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:44:01.360922 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:44:01.360933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-08 00:44:01.360945 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:44:01.360952 | orchestrator | 2026-04-08 00:44:01.360958 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-04-08 00:44:01.360967 | orchestrator | 
Wednesday 08 April 2026 00:43:54 +0000 (0:00:01.270) 0:00:33.931 ******* 2026-04-08 00:44:01.360986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:44:01.361000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:44:01.361105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:44:01.361118 | orchestrator | 2026-04-08 00:44:01.361126 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-04-08 00:44:01.361134 | orchestrator | Wednesday 08 April 2026 00:43:56 +0000 (0:00:01.226) 0:00:35.157 ******* 2026-04-08 00:44:01.361143 | orchestrator | changed: [testbed-node-0] => { 2026-04-08 00:44:01.361156 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:44:01.361163 | orchestrator | } 2026-04-08 00:44:01.361170 | orchestrator | changed: [testbed-node-1] => { 2026-04-08 00:44:01.361176 | 
orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:44:01.361183 | orchestrator | } 2026-04-08 00:44:01.361189 | orchestrator | changed: [testbed-node-2] => { 2026-04-08 00:44:01.361195 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:44:01.361201 | orchestrator | } 2026-04-08 00:44:01.361207 | orchestrator | 2026-04-08 00:44:01.361218 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-08 00:44:01.361229 | orchestrator | Wednesday 08 April 2026 00:43:56 +0000 (0:00:00.458) 0:00:35.616 ******* 2026-04-08 00:44:01.361251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-08 00:44:01.361267 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:44:01.361278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-08 00:44:01.361288 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:44:01.361304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}})  2026-04-08 00:44:01.361327 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:44:01.361336 | orchestrator | 2026-04-08 00:44:01.361347 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-08 00:44:01.361358 | orchestrator | Wednesday 08 April 2026 00:43:57 +0000 (0:00:00.895) 0:00:36.512 ******* 2026-04-08 00:44:01.361368 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:44:01.361379 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:44:01.361389 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:44:01.361400 | orchestrator | 2026-04-08 00:44:01.361411 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-04-08 00:44:01.361423 | orchestrator | Wednesday 08 April 2026 00:43:58 +0000 (0:00:00.822) 0:00:37.335 ******* 2026-04-08 00:44:01.361443 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_qnyfq7nq/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/tmp/ansible_kolla_container_payload_qnyfq7nq/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_qnyfq7nq/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:44:01.361463 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_1wac91ma/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_1wac91ma/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_1wac91ma/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for 
http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:44:01.361490 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_kgz1lsf1/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_kgz1lsf1/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_kgz1lsf1/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in 
create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:44:01.361502 | orchestrator | 2026-04-08 00:44:01.361514 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:44:01.361525 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-08 00:44:01.361537 | orchestrator | testbed-node-0 : ok=19  changed=12  unreachable=0 failed=1  skipped=9  rescued=0 ignored=0 2026-04-08 00:44:01.361549 | orchestrator | testbed-node-1 : ok=17  changed=12  unreachable=0 failed=1  skipped=3  rescued=0 ignored=0 2026-04-08 00:44:01.361565 | orchestrator | testbed-node-2 : ok=17  changed=12  unreachable=0 failed=1  skipped=3  rescued=0 ignored=0 2026-04-08 00:44:01.361576 | orchestrator | 2026-04-08 00:44:01.361588 | orchestrator | 2026-04-08 00:44:01.361599 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:44:01.361609 | orchestrator | Wednesday 08 April 2026 00:43:59 +0000 (0:00:00.936) 0:00:38.272 ******* 2026-04-08 00:44:01.361625 | orchestrator | =============================================================================== 2026-04-08 00:44:01.361636 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.00s 2026-04-08 00:44:01.361648 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.99s 2026-04-08 00:44:01.361658 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.11s 2026-04-08 00:44:01.361669 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.00s 2026-04-08 
00:44:01.361680 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.88s 2026-04-08 00:44:01.361690 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.76s 2026-04-08 00:44:01.361700 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.64s 2026-04-08 00:44:01.361711 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.45s 2026-04-08 00:44:01.361722 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.38s 2026-04-08 00:44:01.361732 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.37s 2026-04-08 00:44:01.361740 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.31s 2026-04-08 00:44:01.361746 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.30s 2026-04-08 00:44:01.361752 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.30s 2026-04-08 00:44:01.361758 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.28s 2026-04-08 00:44:01.361764 | orchestrator | service-cert-copy : rabbitmq | Copying over backend internal TLS key ---- 1.27s 2026-04-08 00:44:01.361819 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 1.23s 2026-04-08 00:44:01.361827 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.18s 2026-04-08 00:44:01.361833 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.16s 2026-04-08 00:44:01.361839 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 0.94s 2026-04-08 00:44:01.361845 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.90s 2026-04-08 00:44:01.361852 
| orchestrator | 2026-04-08 00:44:01 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED 2026-04-08 00:44:01.361858 | orchestrator | 2026-04-08 00:44:01 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:44:01.361865 | orchestrator | 2026-04-08 00:44:01 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:44:01.361871 | orchestrator | 2026-04-08 00:44:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:44:04.401785 | orchestrator | 2026-04-08 00:44:04 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED 2026-04-08 00:44:04.403386 | orchestrator | 2026-04-08 00:44:04 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:44:04.404530 | orchestrator | 2026-04-08 00:44:04 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:44:04.404550 | orchestrator | 2026-04-08 00:44:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:44:07.438472 | orchestrator | 2026-04-08 00:44:07 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED 2026-04-08 00:44:07.442141 | orchestrator | 2026-04-08 00:44:07 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:44:07.444618 | orchestrator | 2026-04-08 00:44:07 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:44:07.444659 | orchestrator | 2026-04-08 00:44:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:44:10.478166 | orchestrator | 2026-04-08 00:44:10 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED 2026-04-08 00:44:10.479047 | orchestrator | 2026-04-08 00:44:10 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:44:10.481613 | orchestrator | 2026-04-08 00:44:10 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:44:10.481665 | orchestrator | 2026-04-08 
00:44:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:46:09.302055 | orchestrator | 2026-04-08 00:46:09 | INFO  | Task
d5954de4-762e-491b-b125-514293696267 is in state STARTED 2026-04-08 00:46:09.303817 | orchestrator | 2026-04-08 00:46:09 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:46:09.305590 | orchestrator | 2026-04-08 00:46:09 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:46:09.305622 | orchestrator | 2026-04-08 00:46:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:46:12.330973 | orchestrator | 2026-04-08 00:46:12 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED 2026-04-08 00:46:12.331262 | orchestrator | 2026-04-08 00:46:12 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:46:12.332212 | orchestrator | 2026-04-08 00:46:12 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state STARTED 2026-04-08 00:46:12.332292 | orchestrator | 2026-04-08 00:46:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:46:15.371530 | orchestrator | 2026-04-08 00:46:15 | INFO  | Task dcfd88e9-df0b-4986-b4e9-5576620d0706 is in state STARTED 2026-04-08 00:46:15.374336 | orchestrator | 2026-04-08 00:46:15 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED 2026-04-08 00:46:15.378927 | orchestrator | 2026-04-08 00:46:15 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:46:15.382566 | orchestrator | 2026-04-08 00:46:15 | INFO  | Task 4635f8ce-d14d-4512-8cb1-fd2e3af38381 is in state SUCCESS 2026-04-08 00:46:15.384644 | orchestrator | 2026-04-08 00:46:15.384717 | orchestrator | 2026-04-08 00:46:15.384730 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-04-08 00:46:15.384737 | orchestrator | 2026-04-08 00:46:15.384744 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-04-08 00:46:15.384827 | orchestrator | Wednesday 08 April 2026 00:41:45 +0000 (0:00:00.269) 0:00:00.269 
******* 2026-04-08 00:46:15.384860 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:46:15.384867 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:46:15.384874 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:46:15.384880 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:46:15.384887 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:46:15.384893 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:46:15.384899 | orchestrator | 2026-04-08 00:46:15.384906 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-04-08 00:46:15.384913 | orchestrator | Wednesday 08 April 2026 00:41:46 +0000 (0:00:00.622) 0:00:00.891 ******* 2026-04-08 00:46:15.384959 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:46:15.384968 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:46:15.384976 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:46:15.384982 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:46:15.384989 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:46:15.384994 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:46:15.384998 | orchestrator | 2026-04-08 00:46:15.385003 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-04-08 00:46:15.385007 | orchestrator | Wednesday 08 April 2026 00:41:47 +0000 (0:00:00.785) 0:00:01.677 ******* 2026-04-08 00:46:15.385011 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:46:15.385015 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:46:15.385021 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:46:15.385029 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:46:15.385038 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:46:15.385044 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:46:15.385050 | orchestrator | 2026-04-08 00:46:15.385056 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 
2026-04-08 00:46:15.385307 | orchestrator | Wednesday 08 April 2026 00:41:47 +0000 (0:00:00.671) 0:00:02.349 *******
2026-04-08 00:46:15.385321 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:46:15.385328 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:46:15.385335 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:15.385341 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:15.385348 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:15.385354 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:46:15.385361 | orchestrator |
2026-04-08 00:46:15.385368 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-04-08 00:46:15.385375 | orchestrator | Wednesday 08 April 2026 00:41:50 +0000 (0:00:02.840) 0:00:05.189 *******
2026-04-08 00:46:15.385382 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:46:15.385389 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:46:15.385397 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:46:15.385404 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:15.385411 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:15.385418 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:15.385426 | orchestrator |
2026-04-08 00:46:15.385430 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-04-08 00:46:15.385435 | orchestrator | Wednesday 08 April 2026 00:41:51 +0000 (0:00:00.845) 0:00:06.034 *******
2026-04-08 00:46:15.385440 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:46:15.385445 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:46:15.385450 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:15.385454 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:46:15.385458 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:15.385462 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:15.385466 | orchestrator |
2026-04-08 00:46:15.385487 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-08 00:46:15.385515 | orchestrator | Wednesday 08 April 2026 00:41:52 +0000 (0:00:01.298) 0:00:07.333 *******
2026-04-08 00:46:15.385521 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:46:15.385530 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:46:15.385535 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:46:15.385541 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:46:15.385547 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.385553 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.385559 | orchestrator |
2026-04-08 00:46:15.385565 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-08 00:46:15.385573 | orchestrator | Wednesday 08 April 2026 00:41:53 +0000 (0:00:00.930) 0:00:08.263 *******
2026-04-08 00:46:15.385579 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:46:15.385585 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:46:15.385590 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:46:15.385600 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:46:15.385606 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.385613 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.385623 | orchestrator |
2026-04-08 00:46:15.385631 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-08 00:46:15.385637 | orchestrator | Wednesday 08 April 2026 00:41:54 +0000 (0:00:00.947) 0:00:09.210 *******
2026-04-08 00:46:15.385644 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-08 00:46:15.385651 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-08 00:46:15.385657 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:46:15.385663 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-08 00:46:15.385669 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-08 00:46:15.385675 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:46:15.385682 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-08 00:46:15.385688 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-08 00:46:15.385694 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-08 00:46:15.385700 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-08 00:46:15.385718 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:46:15.385722 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-08 00:46:15.385727 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-08 00:46:15.385730 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:46:15.385734 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.385738 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-08 00:46:15.385742 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-08 00:46:15.385746 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.385750 | orchestrator |
2026-04-08 00:46:15.385754 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-08 00:46:15.385757 | orchestrator | Wednesday 08 April 2026 00:41:55 +0000 (0:00:00.825) 0:00:10.036 *******
2026-04-08 00:46:15.385761 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:46:15.385765 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:46:15.385768 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:46:15.385772 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:46:15.385776 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.385779 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.385783 | orchestrator |
2026-04-08 00:46:15.385787 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-08 00:46:15.385792 | orchestrator | Wednesday 08 April 2026 00:41:56 +0000 (0:00:01.176) 0:00:11.213 *******
2026-04-08 00:46:15.385802 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:46:15.385807 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:46:15.385811 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:46:15.385814 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:46:15.385818 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:46:15.385822 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:46:15.385826 | orchestrator |
2026-04-08 00:46:15.385878 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-08 00:46:15.385885 | orchestrator | Wednesday 08 April 2026 00:41:58 +0000 (0:00:01.771) 0:00:12.984 *******
2026-04-08 00:46:15.385889 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:46:15.385893 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:46:15.385897 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:15.385901 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:15.385904 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:15.385908 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:46:15.385912 | orchestrator |
2026-04-08 00:46:15.385915 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-08 00:46:15.385919 | orchestrator | Wednesday 08 April 2026 00:42:05 +0000 (0:00:06.624) 0:00:19.609 *******
2026-04-08 00:46:15.385923 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:46:15.385927 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:46:15.385930 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:46:15.385934 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:46:15.385938 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.385942 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.385945 | orchestrator |
2026-04-08 00:46:15.385949 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-08 00:46:15.385953 | orchestrator | Wednesday 08 April 2026 00:42:08 +0000 (0:00:02.768) 0:00:22.377 *******
2026-04-08 00:46:15.385957 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:46:15.385960 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:46:15.385964 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:46:15.385968 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:46:15.385972 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.385975 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.385979 | orchestrator |
2026-04-08 00:46:15.385989 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-08 00:46:15.385994 | orchestrator | Wednesday 08 April 2026 00:42:09 +0000 (0:00:01.703) 0:00:24.081 *******
2026-04-08 00:46:15.385998 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:46:15.386001 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:46:15.386005 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:46:15.386009 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:46:15.386047 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.386052 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.386056 | orchestrator |
2026-04-08 00:46:15.386060 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-08 00:46:15.386064 | orchestrator | Wednesday 08 April 2026 00:42:10 +0000 (0:00:01.178) 0:00:25.260 *******
2026-04-08 00:46:15.386068 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-08 00:46:15.386072 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-08 00:46:15.386076 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:46:15.386079 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-04-08 00:46:15.386083 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-04-08 00:46:15.386087 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:46:15.386091 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-08 00:46:15.386094 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-08 00:46:15.386098 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-08 00:46:15.386107 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:46:15.386111 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-08 00:46:15.386115 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-08 00:46:15.386118 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-08 00:46:15.386122 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:46:15.386126 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.386130 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-08 00:46:15.386133 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-08 00:46:15.386137 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.386141 | orchestrator |
2026-04-08 00:46:15.386145 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-08 00:46:15.386156 | orchestrator | Wednesday 08 April 2026 00:42:11 +0000 (0:00:00.825) 0:00:26.085 *******
2026-04-08 00:46:15.386160 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:46:15.386164 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:46:15.386168 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:46:15.386172 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:46:15.386176 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.386179 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.386183 | orchestrator |
2026-04-08 00:46:15.386187 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-08 00:46:15.386191 | orchestrator | Wednesday 08 April 2026 00:42:12 +0000 (0:00:01.142) 0:00:27.228 *******
2026-04-08 00:46:15.386195 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:46:15.386199 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:46:15.386202 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:46:15.386206 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:46:15.386210 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.386213 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.386217 | orchestrator |
2026-04-08 00:46:15.386221 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-08 00:46:15.386225 | orchestrator |
2026-04-08 00:46:15.386229 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-08 00:46:15.386232 | orchestrator | Wednesday 08 April 2026 00:42:14 +0000 (0:00:01.770) 0:00:28.998 *******
2026-04-08 00:46:15.386236 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:46:15.386240 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:46:15.386244 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:46:15.386247 | orchestrator |
2026-04-08 00:46:15.386251 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-08 00:46:15.386255 | orchestrator | Wednesday 08 April 2026 00:42:16 +0000 (0:00:01.404) 0:00:30.403 *******
2026-04-08 00:46:15.386259 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:46:15.386263 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:46:15.386268 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:46:15.386274 | orchestrator |
2026-04-08 00:46:15.386280 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-08 00:46:15.386286 | orchestrator | Wednesday 08 April 2026 00:42:17 +0000 (0:00:01.167) 0:00:31.570 *******
2026-04-08 00:46:15.386292 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:46:15.386298 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:46:15.386304 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:46:15.386310 | orchestrator |
2026-04-08 00:46:15.386317 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-08 00:46:15.386323 | orchestrator | Wednesday 08 April 2026 00:42:18 +0000 (0:00:00.817) 0:00:32.388 *******
2026-04-08 00:46:15.386329 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:46:15.386334 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:46:15.386342 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:46:15.386349 | orchestrator |
2026-04-08 00:46:15.386360 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-08 00:46:15.386370 | orchestrator | Wednesday 08 April 2026 00:42:18 +0000 (0:00:00.807) 0:00:33.195 *******
2026-04-08 00:46:15.386383 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:46:15.386390 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.386396 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.386402 | orchestrator |
2026-04-08 00:46:15.386408 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-08 00:46:15.386414 | orchestrator | Wednesday 08 April 2026 00:42:19 +0000 (0:00:00.489) 0:00:33.685 *******
2026-04-08 00:46:15.386420 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:15.386426 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:15.386432 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:15.386438 | orchestrator |
2026-04-08 00:46:15.386451 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-08 00:46:15.386458 | orchestrator | Wednesday 08 April 2026 00:42:19 +0000 (0:00:00.666) 0:00:34.351 *******
2026-04-08 00:46:15.386465 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:15.386472 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:15.386476 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:15.386480 | orchestrator |
2026-04-08 00:46:15.386484 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-08 00:46:15.386488 | orchestrator | Wednesday 08 April 2026 00:42:21 +0000 (0:00:01.398) 0:00:35.750 *******
2026-04-08 00:46:15.386492 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:46:15.386496 | orchestrator |
2026-04-08 00:46:15.386500 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-08 00:46:15.386503 | orchestrator | Wednesday 08 April 2026 00:42:21 +0000 (0:00:00.552) 0:00:36.302 *******
2026-04-08 00:46:15.386507 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:46:15.386511 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:46:15.386515 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:46:15.386519 | orchestrator |
2026-04-08 00:46:15.386523 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-04-08 00:46:15.386527 | orchestrator | Wednesday 08 April 2026 00:42:24 +0000 (0:00:02.835) 0:00:39.138 *******
2026-04-08 00:46:15.386531 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.386534 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.386538 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:15.386542 | orchestrator |
2026-04-08 00:46:15.386546 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-04-08 00:46:15.386550 | orchestrator | Wednesday 08 April 2026 00:42:25 +0000 (0:00:00.551) 0:00:39.690 *******
2026-04-08 00:46:15.386554 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.386558 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.386561 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:15.386565 | orchestrator |
2026-04-08 00:46:15.386569 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-04-08 00:46:15.386572 | orchestrator | Wednesday 08 April 2026 00:42:26 +0000 (0:00:00.972) 0:00:40.662 *******
2026-04-08 00:46:15.386576 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.386581 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.386587 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:15.386593 | orchestrator |
2026-04-08 00:46:15.386599 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-04-08 00:46:15.386612 | orchestrator | Wednesday 08 April 2026 00:42:27 +0000 (0:00:01.477) 0:00:42.139 *******
2026-04-08 00:46:15.386619 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:46:15.386625 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.386631 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.386637 | orchestrator |
2026-04-08 00:46:15.386644 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-04-08 00:46:15.386650 | orchestrator | Wednesday 08 April 2026 00:42:28 +0000 (0:00:00.451) 0:00:42.590 *******
2026-04-08 00:46:15.386656 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:46:15.386669 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.386677 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.386681 | orchestrator |
2026-04-08 00:46:15.386685 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-04-08 00:46:15.386689 | orchestrator | Wednesday 08 April 2026 00:42:28 +0000 (0:00:00.451) 0:00:43.042 *******
2026-04-08 00:46:15.386692 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:15.386696 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:15.386700 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:15.386704 | orchestrator |
2026-04-08 00:46:15.386707 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-04-08 00:46:15.386711 | orchestrator | Wednesday 08 April 2026 00:42:31 +0000 (0:00:02.361) 0:00:45.403 *******
2026-04-08 00:46:15.386715 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:46:15.386719 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:46:15.386723 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:46:15.386726 | orchestrator |
2026-04-08 00:46:15.386730 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-04-08 00:46:15.386734 | orchestrator | Wednesday 08 April 2026 00:42:33 +0000 (0:00:02.592) 0:00:47.995 *******
2026-04-08 00:46:15.386739 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:46:15.386746 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:46:15.386754 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:46:15.386763 | orchestrator |
2026-04-08 00:46:15.386770 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-04-08 00:46:15.386776 | orchestrator | Wednesday 08 April 2026 00:42:34 +0000 (0:00:00.610) 0:00:48.606 *******
2026-04-08 00:46:15.386781 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-08 00:46:15.386789 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-08 00:46:15.386795 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-08 00:46:15.386801 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-08 00:46:15.386807 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-08 00:46:15.386812 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-08 00:46:15.386818 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-08 00:46:15.386855 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-08 00:46:15.386864 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-08 00:46:15.386870 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-08 00:46:15.386876 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-08 00:46:15.386881 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-08 00:46:15.386887 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-04-08 00:46:15.386893 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-04-08 00:46:15.386906 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-04-08 00:46:15.386913 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:46:15.386919 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:46:15.386927 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:46:15.386931 | orchestrator |
2026-04-08 00:46:15.386935 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-04-08 00:46:15.386939 | orchestrator | Wednesday 08 April 2026 00:43:27 +0000 (0:00:53.500) 0:01:42.107 *******
2026-04-08 00:46:15.386942 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:46:15.386946 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.386950 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.386954 | orchestrator |
2026-04-08 00:46:15.386958 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-04-08 00:46:15.386967 | orchestrator | Wednesday 08 April 2026 00:43:28 +0000 (0:00:00.351) 0:01:42.459 *******
2026-04-08 00:46:15.386971 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:15.386975 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:15.386978 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:15.386982 | orchestrator |
2026-04-08 00:46:15.386986 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-04-08 00:46:15.386990 | orchestrator | Wednesday 08 April 2026 00:43:29 +0000 (0:00:01.691) 0:01:44.150 *******
2026-04-08 00:46:15.386993 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:15.386997 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:15.387001 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:15.387005 | orchestrator |
2026-04-08 00:46:15.387009 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-04-08 00:46:15.387013 | orchestrator | Wednesday 08 April 2026 00:43:31 +0000 (0:00:01.546) 0:01:45.697 *******
2026-04-08 00:46:15.387017 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:15.387021 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:15.387024 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:15.387028 | orchestrator |
2026-04-08 00:46:15.387032 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-04-08 00:46:15.387035 | orchestrator | Wednesday 08 April 2026 00:43:57 +0000 (0:00:25.898) 0:02:11.596 *******
2026-04-08 00:46:15.387039 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:46:15.387043 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:46:15.387047 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:46:15.387051 | orchestrator |
2026-04-08 00:46:15.387055 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-04-08 00:46:15.387058 | orchestrator | Wednesday 08 April 2026 00:43:57 +0000 (0:00:00.810) 0:02:12.256 *******
2026-04-08 00:46:15.387062 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:46:15.387066 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:46:15.387070 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:46:15.387073 | orchestrator |
2026-04-08 00:46:15.387077 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-04-08 00:46:15.387081 | orchestrator | Wednesday 08 April 2026 00:43:58 +0000 (0:00:00.662) 0:02:13.067 *******
2026-04-08 00:46:15.387085 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:15.387089 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:15.387092 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:15.387096 | orchestrator |
2026-04-08 00:46:15.387100 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-04-08 00:46:15.387104 | orchestrator | Wednesday 08 April 2026 00:43:59 +0000 (0:00:00.583) 0:02:13.729 *******
2026-04-08 00:46:15.387107 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:46:15.387111 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:46:15.387115 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:46:15.387119 | orchestrator |
2026-04-08 00:46:15.387123 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-04-08 00:46:15.387131 | orchestrator | Wednesday 08 April 2026 00:43:59 +0000 (0:00:00.303) 0:02:14.313 *******
2026-04-08 00:46:15.387135 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:46:15.387138 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:46:15.387142 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:46:15.387146 | orchestrator |
2026-04-08 00:46:15.387150 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-04-08 00:46:15.387154 | orchestrator | Wednesday 08 April 2026 00:44:00 +0000 (0:00:00.670) 0:02:14.616 *******
2026-04-08 00:46:15.387158 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:15.387161 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:15.387165 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:15.387169 | orchestrator |
2026-04-08 00:46:15.387173 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-04-08 00:46:15.387177 | orchestrator | Wednesday 08 April 2026 00:44:00 +0000 (0:00:00.885) 0:02:15.287 *******
2026-04-08 00:46:15.387190 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:15.387195 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:15.387199 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:15.387202 | orchestrator |
2026-04-08 00:46:15.387206 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-04-08 00:46:15.387210 | orchestrator | Wednesday 08 April 2026 00:44:01 +0000 (0:00:00.872) 0:02:16.173 *******
2026-04-08 00:46:15.387214 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:15.387218 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:15.387222 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:15.387226 | orchestrator |
2026-04-08 00:46:15.387229 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-04-08 00:46:15.387233 | orchestrator | Wednesday 08 April 2026 00:44:02 +0000 (0:00:00.774) 0:02:17.045 *******
2026-04-08 00:46:15.387237 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:46:15.387241 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:46:15.387245 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:46:15.387248 | orchestrator |
2026-04-08 00:46:15.387252 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-04-08 00:46:15.387256 | orchestrator | Wednesday 08 April 2026 00:44:03 +0000 (0:00:00.774) 0:02:17.820 *******
2026-04-08 00:46:15.387260 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:46:15.387264 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.387268 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.387272 | orchestrator |
2026-04-08 00:46:15.387275 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-04-08
00:46:15.387279 | orchestrator | Wednesday 08 April 2026 00:44:03 +0000 (0:00:00.300) 0:02:18.120 ******* 2026-04-08 00:46:15.387283 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:46:15.387287 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:46:15.387291 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:46:15.387295 | orchestrator | 2026-04-08 00:46:15.387298 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-04-08 00:46:15.387302 | orchestrator | Wednesday 08 April 2026 00:44:04 +0000 (0:00:00.489) 0:02:18.610 ******* 2026-04-08 00:46:15.387306 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:46:15.387310 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:46:15.387314 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:46:15.387318 | orchestrator | 2026-04-08 00:46:15.387322 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-04-08 00:46:15.387326 | orchestrator | Wednesday 08 April 2026 00:44:04 +0000 (0:00:00.600) 0:02:19.210 ******* 2026-04-08 00:46:15.387329 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:46:15.387337 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:46:15.387341 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:46:15.387345 | orchestrator | 2026-04-08 00:46:15.387350 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-04-08 00:46:15.387353 | orchestrator | Wednesday 08 April 2026 00:44:05 +0000 (0:00:00.589) 0:02:19.799 ******* 2026-04-08 00:46:15.387360 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-08 00:46:15.387364 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-08 00:46:15.387368 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-08 00:46:15.387372 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-08 00:46:15.387376 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-08 00:46:15.387379 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-08 00:46:15.387383 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-08 00:46:15.387388 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-08 00:46:15.387392 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-08 00:46:15.387396 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-04-08 00:46:15.387400 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-08 00:46:15.387404 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-08 00:46:15.387407 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-04-08 00:46:15.387411 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-08 00:46:15.387415 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-08 00:46:15.387418 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-08 00:46:15.387422 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-08 00:46:15.387426 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-08 00:46:15.387430 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-08 00:46:15.387434 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-08 00:46:15.387437 | orchestrator | 2026-04-08 00:46:15.387441 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-04-08 00:46:15.387445 | orchestrator | 2026-04-08 00:46:15.387451 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-04-08 00:46:15.387456 | orchestrator | Wednesday 08 April 2026 00:44:08 +0000 (0:00:02.886) 0:02:22.685 ******* 2026-04-08 00:46:15.387460 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:46:15.387463 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:46:15.387467 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:46:15.387471 | orchestrator | 2026-04-08 00:46:15.387475 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-04-08 00:46:15.387479 | orchestrator | Wednesday 08 April 2026 00:44:08 +0000 (0:00:00.275) 0:02:22.961 ******* 2026-04-08 00:46:15.387482 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:46:15.387486 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:46:15.387490 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:46:15.387494 | orchestrator | 2026-04-08 00:46:15.387498 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-04-08 00:46:15.387502 | orchestrator | Wednesday 08 April 2026 00:44:09 +0000 (0:00:00.537) 0:02:23.498 ******* 2026-04-08 00:46:15.387506 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:46:15.387510 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:46:15.387517 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:46:15.387531 | orchestrator | 2026-04-08 
00:46:15.387539 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-04-08 00:46:15.387545 | orchestrator | Wednesday 08 April 2026 00:44:09 +0000 (0:00:00.311) 0:02:23.810 ******* 2026-04-08 00:46:15.387551 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:46:15.387556 | orchestrator | 2026-04-08 00:46:15.387565 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-04-08 00:46:15.387571 | orchestrator | Wednesday 08 April 2026 00:44:09 +0000 (0:00:00.527) 0:02:24.338 ******* 2026-04-08 00:46:15.387577 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:46:15.387583 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:46:15.387588 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:46:15.387594 | orchestrator | 2026-04-08 00:46:15.387600 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-04-08 00:46:15.387606 | orchestrator | Wednesday 08 April 2026 00:44:10 +0000 (0:00:00.272) 0:02:24.610 ******* 2026-04-08 00:46:15.387612 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:46:15.387618 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:46:15.387628 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:46:15.387634 | orchestrator | 2026-04-08 00:46:15.387645 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-04-08 00:46:15.387657 | orchestrator | Wednesday 08 April 2026 00:44:10 +0000 (0:00:00.299) 0:02:24.910 ******* 2026-04-08 00:46:15.387664 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:46:15.387670 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:46:15.387676 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:46:15.387682 | orchestrator | 2026-04-08 00:46:15.387688 | orchestrator | TASK [k3s_agent : Create 
/etc/rancher/k3s directory] *************************** 2026-04-08 00:46:15.387694 | orchestrator | Wednesday 08 April 2026 00:44:10 +0000 (0:00:00.428) 0:02:25.338 ******* 2026-04-08 00:46:15.387701 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:46:15.387706 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:46:15.387713 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:46:15.387719 | orchestrator | 2026-04-08 00:46:15.387725 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-04-08 00:46:15.387731 | orchestrator | Wednesday 08 April 2026 00:44:11 +0000 (0:00:00.612) 0:02:25.951 ******* 2026-04-08 00:46:15.387737 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:46:15.387741 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:46:15.387745 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:46:15.387749 | orchestrator | 2026-04-08 00:46:15.387752 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-04-08 00:46:15.387756 | orchestrator | Wednesday 08 April 2026 00:44:12 +0000 (0:00:01.100) 0:02:27.051 ******* 2026-04-08 00:46:15.387760 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:46:15.387764 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:46:15.387768 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:46:15.387772 | orchestrator | 2026-04-08 00:46:15.387775 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-04-08 00:46:15.387779 | orchestrator | Wednesday 08 April 2026 00:44:13 +0000 (0:00:01.161) 0:02:28.213 ******* 2026-04-08 00:46:15.387783 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:46:15.387787 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:46:15.387790 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:46:15.387794 | orchestrator | 2026-04-08 00:46:15.387798 | orchestrator | PLAY [Prepare kubeconfig file] 
************************************************* 2026-04-08 00:46:15.387802 | orchestrator | 2026-04-08 00:46:15.387805 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-08 00:46:15.387809 | orchestrator | Wednesday 08 April 2026 00:44:24 +0000 (0:00:10.774) 0:02:38.988 ******* 2026-04-08 00:46:15.387813 | orchestrator | ok: [testbed-manager] 2026-04-08 00:46:15.387817 | orchestrator | 2026-04-08 00:46:15.387821 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-08 00:46:15.387913 | orchestrator | Wednesday 08 April 2026 00:44:25 +0000 (0:00:00.698) 0:02:39.687 ******* 2026-04-08 00:46:15.387919 | orchestrator | changed: [testbed-manager] 2026-04-08 00:46:15.387923 | orchestrator | 2026-04-08 00:46:15.387927 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-08 00:46:15.387930 | orchestrator | Wednesday 08 April 2026 00:44:25 +0000 (0:00:00.444) 0:02:40.131 ******* 2026-04-08 00:46:15.387934 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-08 00:46:15.387938 | orchestrator | 2026-04-08 00:46:15.387942 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-08 00:46:15.387946 | orchestrator | Wednesday 08 April 2026 00:44:26 +0000 (0:00:00.593) 0:02:40.724 ******* 2026-04-08 00:46:15.387950 | orchestrator | changed: [testbed-manager] 2026-04-08 00:46:15.387953 | orchestrator | 2026-04-08 00:46:15.387957 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-08 00:46:15.387961 | orchestrator | Wednesday 08 April 2026 00:44:27 +0000 (0:00:00.841) 0:02:41.566 ******* 2026-04-08 00:46:15.387965 | orchestrator | changed: [testbed-manager] 2026-04-08 00:46:15.387968 | orchestrator | 2026-04-08 00:46:15.387977 | orchestrator | TASK [Make kubeconfig available for use inside the 
manager service] ************ 2026-04-08 00:46:15.387981 | orchestrator | Wednesday 08 April 2026 00:44:27 +0000 (0:00:00.553) 0:02:42.120 ******* 2026-04-08 00:46:15.387985 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-08 00:46:15.387989 | orchestrator | 2026-04-08 00:46:15.387993 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-08 00:46:15.387997 | orchestrator | Wednesday 08 April 2026 00:44:29 +0000 (0:00:01.851) 0:02:43.972 ******* 2026-04-08 00:46:15.388000 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-08 00:46:15.388004 | orchestrator | 2026-04-08 00:46:15.388008 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-04-08 00:46:15.388011 | orchestrator | Wednesday 08 April 2026 00:44:30 +0000 (0:00:00.711) 0:02:44.683 ******* 2026-04-08 00:46:15.388015 | orchestrator | changed: [testbed-manager] 2026-04-08 00:46:15.388019 | orchestrator | 2026-04-08 00:46:15.388023 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-08 00:46:15.388026 | orchestrator | Wednesday 08 April 2026 00:44:30 +0000 (0:00:00.347) 0:02:45.031 ******* 2026-04-08 00:46:15.388030 | orchestrator | changed: [testbed-manager] 2026-04-08 00:46:15.388034 | orchestrator | 2026-04-08 00:46:15.388037 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-04-08 00:46:15.388041 | orchestrator | 2026-04-08 00:46:15.388045 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-04-08 00:46:15.388049 | orchestrator | Wednesday 08 April 2026 00:44:31 +0000 (0:00:00.370) 0:02:45.401 ******* 2026-04-08 00:46:15.388053 | orchestrator | ok: [testbed-manager] 2026-04-08 00:46:15.388056 | orchestrator | 2026-04-08 00:46:15.388060 | orchestrator | TASK [kubectl : Include distribution specific install tasks] 
******************* 2026-04-08 00:46:15.388064 | orchestrator | Wednesday 08 April 2026 00:44:31 +0000 (0:00:00.155) 0:02:45.557 ******* 2026-04-08 00:46:15.388067 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-04-08 00:46:15.388072 | orchestrator | 2026-04-08 00:46:15.388075 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-04-08 00:46:15.388079 | orchestrator | Wednesday 08 April 2026 00:44:31 +0000 (0:00:00.199) 0:02:45.756 ******* 2026-04-08 00:46:15.388083 | orchestrator | ok: [testbed-manager] 2026-04-08 00:46:15.388087 | orchestrator | 2026-04-08 00:46:15.388090 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-04-08 00:46:15.388094 | orchestrator | Wednesday 08 April 2026 00:44:32 +0000 (0:00:00.678) 0:02:46.435 ******* 2026-04-08 00:46:15.388102 | orchestrator | ok: [testbed-manager] 2026-04-08 00:46:15.388106 | orchestrator | 2026-04-08 00:46:15.388110 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-04-08 00:46:15.388114 | orchestrator | Wednesday 08 April 2026 00:44:33 +0000 (0:00:01.261) 0:02:47.696 ******* 2026-04-08 00:46:15.388123 | orchestrator | changed: [testbed-manager] 2026-04-08 00:46:15.388127 | orchestrator | 2026-04-08 00:46:15.388131 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-04-08 00:46:15.388135 | orchestrator | Wednesday 08 April 2026 00:44:34 +0000 (0:00:00.910) 0:02:48.607 ******* 2026-04-08 00:46:15.388139 | orchestrator | ok: [testbed-manager] 2026-04-08 00:46:15.388142 | orchestrator | 2026-04-08 00:46:15.388146 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-04-08 00:46:15.388150 | orchestrator | Wednesday 08 April 2026 00:44:34 +0000 (0:00:00.439) 0:02:49.046 ******* 2026-04-08 00:46:15.388153 | 
orchestrator | changed: [testbed-manager] 2026-04-08 00:46:15.388157 | orchestrator | 2026-04-08 00:46:15.388161 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-04-08 00:46:15.388165 | orchestrator | Wednesday 08 April 2026 00:44:41 +0000 (0:00:06.814) 0:02:55.861 ******* 2026-04-08 00:46:15.388168 | orchestrator | changed: [testbed-manager] 2026-04-08 00:46:15.388172 | orchestrator | 2026-04-08 00:46:15.388176 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-04-08 00:46:15.388180 | orchestrator | Wednesday 08 April 2026 00:44:53 +0000 (0:00:11.639) 0:03:07.500 ******* 2026-04-08 00:46:15.388184 | orchestrator | ok: [testbed-manager] 2026-04-08 00:46:15.388188 | orchestrator | 2026-04-08 00:46:15.388192 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-04-08 00:46:15.388195 | orchestrator | 2026-04-08 00:46:15.388199 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-04-08 00:46:15.388203 | orchestrator | Wednesday 08 April 2026 00:44:53 +0000 (0:00:00.433) 0:03:07.934 ******* 2026-04-08 00:46:15.388206 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:46:15.388210 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:46:15.388216 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:46:15.388222 | orchestrator | 2026-04-08 00:46:15.388228 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-04-08 00:46:15.388234 | orchestrator | Wednesday 08 April 2026 00:44:53 +0000 (0:00:00.251) 0:03:08.185 ******* 2026-04-08 00:46:15.388241 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:46:15.388247 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:46:15.388253 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:46:15.388260 | orchestrator | 2026-04-08 00:46:15.388265 | 
orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-04-08 00:46:15.388271 | orchestrator | Wednesday 08 April 2026 00:44:54 +0000 (0:00:00.406) 0:03:08.592 ******* 2026-04-08 00:46:15.388277 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:46:15.388284 | orchestrator | 2026-04-08 00:46:15.388294 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-04-08 00:46:15.388303 | orchestrator | Wednesday 08 April 2026 00:44:54 +0000 (0:00:00.578) 0:03:09.170 ******* 2026-04-08 00:46:15.388311 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-08 00:46:15.388317 | orchestrator | 2026-04-08 00:46:15.388323 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-04-08 00:46:15.388329 | orchestrator | Wednesday 08 April 2026 00:44:55 +0000 (0:00:00.835) 0:03:10.006 ******* 2026-04-08 00:46:15.388335 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 00:46:15.388341 | orchestrator | 2026-04-08 00:46:15.388347 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-04-08 00:46:15.388359 | orchestrator | Wednesday 08 April 2026 00:44:56 +0000 (0:00:00.970) 0:03:10.977 ******* 2026-04-08 00:46:15.388364 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:46:15.388371 | orchestrator | 2026-04-08 00:46:15.388377 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-04-08 00:46:15.388385 | orchestrator | Wednesday 08 April 2026 00:44:56 +0000 (0:00:00.089) 0:03:11.066 ******* 2026-04-08 00:46:15.388392 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 00:46:15.388404 | orchestrator | 2026-04-08 00:46:15.388410 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-04-08 
00:46:15.388417 | orchestrator | Wednesday 08 April 2026 00:44:57 +0000 (0:00:00.978) 0:03:12.044 ******* 2026-04-08 00:46:15.388422 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:46:15.388426 | orchestrator | 2026-04-08 00:46:15.388430 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-04-08 00:46:15.388434 | orchestrator | Wednesday 08 April 2026 00:44:57 +0000 (0:00:00.106) 0:03:12.151 ******* 2026-04-08 00:46:15.388438 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:46:15.388441 | orchestrator | 2026-04-08 00:46:15.388445 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-04-08 00:46:15.388449 | orchestrator | Wednesday 08 April 2026 00:44:58 +0000 (0:00:00.245) 0:03:12.396 ******* 2026-04-08 00:46:15.388453 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:46:15.388456 | orchestrator | 2026-04-08 00:46:15.388460 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-04-08 00:46:15.388464 | orchestrator | Wednesday 08 April 2026 00:44:58 +0000 (0:00:00.108) 0:03:12.504 ******* 2026-04-08 00:46:15.388468 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:46:15.388471 | orchestrator | 2026-04-08 00:46:15.388475 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-04-08 00:46:15.388479 | orchestrator | Wednesday 08 April 2026 00:44:58 +0000 (0:00:00.122) 0:03:12.627 ******* 2026-04-08 00:46:15.388483 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-08 00:46:15.388487 | orchestrator | 2026-04-08 00:46:15.388491 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-04-08 00:46:15.388495 | orchestrator | Wednesday 08 April 2026 00:45:02 +0000 (0:00:04.608) 0:03:17.235 ******* 2026-04-08 00:46:15.388499 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=deployment/cilium-operator) 2026-04-08 00:46:15.388507 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2026-04-08 00:46:15.388512 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-04-08 00:46:15.388516 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-04-08 00:46:15.388520 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-04-08 00:46:15.388524 | orchestrator | 2026-04-08 00:46:15.388528 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-04-08 00:46:15.388531 | orchestrator | Wednesday 08 April 2026 00:45:47 +0000 (0:00:44.480) 0:04:01.716 ******* 2026-04-08 00:46:15.388535 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 00:46:15.388539 | orchestrator | 2026-04-08 00:46:15.388543 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-04-08 00:46:15.388546 | orchestrator | Wednesday 08 April 2026 00:45:48 +0000 (0:00:01.069) 0:04:02.786 ******* 2026-04-08 00:46:15.388550 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-08 00:46:15.388554 | orchestrator | 2026-04-08 00:46:15.388558 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-04-08 00:46:15.388562 | orchestrator | Wednesday 08 April 2026 00:45:49 +0000 (0:00:01.526) 0:04:04.313 ******* 2026-04-08 00:46:15.388565 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-08 00:46:15.388569 | orchestrator | 2026-04-08 00:46:15.388573 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-04-08 00:46:15.388577 | orchestrator | Wednesday 08 April 2026 00:45:51 +0000 (0:00:01.144) 0:04:05.457 ******* 2026-04-08 00:46:15.388581 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:46:15.388588 | 
orchestrator | 2026-04-08 00:46:15.388594 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-04-08 00:46:15.388600 | orchestrator | Wednesday 08 April 2026 00:45:51 +0000 (0:00:00.100) 0:04:05.558 ******* 2026-04-08 00:46:15.388606 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-04-08 00:46:15.388618 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-04-08 00:46:15.388623 | orchestrator | 2026-04-08 00:46:15.388630 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-04-08 00:46:15.388637 | orchestrator | Wednesday 08 April 2026 00:45:53 +0000 (0:00:02.031) 0:04:07.589 ******* 2026-04-08 00:46:15.388641 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:46:15.388645 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:46:15.388649 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:46:15.388653 | orchestrator | 2026-04-08 00:46:15.388656 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-04-08 00:46:15.388660 | orchestrator | Wednesday 08 April 2026 00:45:53 +0000 (0:00:00.262) 0:04:07.852 ******* 2026-04-08 00:46:15.388664 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:46:15.388668 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:46:15.388672 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:46:15.388676 | orchestrator | 2026-04-08 00:46:15.388680 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-04-08 00:46:15.388683 | orchestrator | 2026-04-08 00:46:15.388687 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-04-08 00:46:15.388691 | orchestrator | Wednesday 08 April 2026 00:45:54 +0000 (0:00:00.943) 0:04:08.795 ******* 2026-04-08 00:46:15.388695 | 
orchestrator | ok: [testbed-manager]
2026-04-08 00:46:15.388699 | orchestrator |
2026-04-08 00:46:15.388702 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-04-08 00:46:15.388710 | orchestrator | Wednesday 08 April 2026 00:45:54 +0000 (0:00:00.148) 0:04:08.944 *******
2026-04-08 00:46:15.388714 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-04-08 00:46:15.388718 | orchestrator |
2026-04-08 00:46:15.388721 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-04-08 00:46:15.388725 | orchestrator | Wednesday 08 April 2026 00:45:54 +0000 (0:00:00.197) 0:04:09.141 *******
2026-04-08 00:46:15.388729 | orchestrator | changed: [testbed-manager]
2026-04-08 00:46:15.388732 | orchestrator |
2026-04-08 00:46:15.388736 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-04-08 00:46:15.388740 | orchestrator |
2026-04-08 00:46:15.388744 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-04-08 00:46:15.388747 | orchestrator | Wednesday 08 April 2026 00:46:00 +0000 (0:00:05.853) 0:04:14.994 *******
2026-04-08 00:46:15.388751 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:46:15.388755 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:46:15.388759 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:46:15.388763 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:46:15.388767 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:46:15.388770 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:46:15.388774 | orchestrator |
2026-04-08 00:46:15.388778 | orchestrator | TASK [Manage labels] ***********************************************************
2026-04-08 00:46:15.388782 | orchestrator | Wednesday 08 April 2026 00:46:01 +0000 (0:00:00.655) 0:04:15.650 *******
2026-04-08 00:46:15.388786 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-08 00:46:15.388790 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-08 00:46:15.388794 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-08 00:46:15.388798 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-08 00:46:15.388802 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-08 00:46:15.388805 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-08 00:46:15.388809 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-08 00:46:15.388813 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-08 00:46:15.388824 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-08 00:46:15.388828 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-08 00:46:15.388847 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-08 00:46:15.388851 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-08 00:46:15.388855 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-08 00:46:15.388859 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-08 00:46:15.388862 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-08 00:46:15.388866 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-08 00:46:15.388870 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-08 00:46:15.388873 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-08 00:46:15.388877 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-08 00:46:15.388881 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-08 00:46:15.388885 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-08 00:46:15.388888 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-08 00:46:15.388892 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-08 00:46:15.388896 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-08 00:46:15.388901 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-08 00:46:15.388908 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-08 00:46:15.388913 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-08 00:46:15.388919 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-08 00:46:15.388926 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-08 00:46:15.388933 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-08 00:46:15.388939 | orchestrator |
2026-04-08 00:46:15.388945 | orchestrator | TASK [Manage annotations] ******************************************************
2026-04-08 00:46:15.388951 | orchestrator | Wednesday 08 April 2026 00:46:12 +0000 (0:00:10.712) 0:04:26.362 *******
2026-04-08 00:46:15.388957 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:46:15.388966 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:46:15.388974 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:46:15.388982 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:46:15.388989 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.388995 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.389002 | orchestrator |
2026-04-08 00:46:15.389013 | orchestrator | TASK [Manage taints] ***********************************************************
2026-04-08 00:46:15.389020 | orchestrator | Wednesday 08 April 2026 00:46:12 +0000 (0:00:00.573) 0:04:26.936 *******
2026-04-08 00:46:15.389027 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:46:15.389031 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:46:15.389035 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:46:15.389039 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:46:15.389043 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:46:15.389046 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:46:15.389050 | orchestrator |
2026-04-08 00:46:15.389054 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:46:15.389063 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:46:15.389069 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-08 00:46:15.389073 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-08 00:46:15.389078 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-08 00:46:15.389081 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-08 00:46:15.389085 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-08 00:46:15.389089 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-08 00:46:15.389093 | orchestrator |
2026-04-08 00:46:15.389097 | orchestrator |
2026-04-08 00:46:15.389101 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:46:15.389108 | orchestrator | Wednesday 08 April 2026 00:46:12 +0000 (0:00:00.321) 0:04:27.257 *******
2026-04-08 00:46:15.389112 | orchestrator | ===============================================================================
2026-04-08 00:46:15.389116 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.50s
2026-04-08 00:46:15.389120 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 44.48s
2026-04-08 00:46:15.389124 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.90s
2026-04-08 00:46:15.389128 | orchestrator | kubectl : Install required packages ------------------------------------ 11.64s
2026-04-08 00:46:15.389132 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.78s
2026-04-08 00:46:15.389136 | orchestrator | Manage labels ---------------------------------------------------------- 10.71s
2026-04-08 00:46:15.389140 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.81s
2026-04-08 00:46:15.389144 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.62s
2026-04-08 00:46:15.389148 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.85s
2026-04-08 00:46:15.389151 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.61s
2026-04-08 00:46:15.389155 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.89s
2026-04-08 00:46:15.389159 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.84s
2026-04-08 00:46:15.389163 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.84s
2026-04-08 00:46:15.389167 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 2.77s
2026-04-08 00:46:15.389171 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.59s
2026-04-08 00:46:15.389174 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.36s
2026-04-08 00:46:15.389178 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.03s
2026-04-08 00:46:15.389182 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.85s
2026-04-08 00:46:15.389186 | orchestrator | k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries --- 1.77s
2026-04-08 00:46:15.389190 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 1.77s
2026-04-08 00:46:15.389198 | orchestrator | 2026-04-08 00:46:15 | INFO  | Task 25987425-6dc3-4ebb-927a-6dfdbdcbfc5c is in state STARTED
2026-04-08 00:46:15.389202 | orchestrator | 2026-04-08 00:46:15 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:46:18.428224 | orchestrator | 2026-04-08 00:46:18 | INFO  | Task dcfd88e9-df0b-4986-b4e9-5576620d0706 is in state STARTED
2026-04-08 00:46:18.429674 | orchestrator | 2026-04-08 00:46:18 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED
2026-04-08 00:46:18.431126 | orchestrator | 2026-04-08 00:46:18 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
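The `Task <uuid> is in state STARTED` / `Wait 1 second(s) until the next check` messages in this log come from a plain polling loop on the manager. A minimal sketch of that pattern in Python, assuming a hypothetical `get_state` callable that queries the task backend (the real osism wait logic may differ):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, max_checks=300):
    """Poll task states until every task has left STARTED (terminal state reached).

    get_state(task_id) -> str is an assumed callback; "STARTED" means still running.
    Returns True when all tasks finished, False if max_checks is exhausted.
    """
    pending = set(task_ids)
    for _ in range(max_checks):
        # sorted() copies the set, so we can discard members while iterating
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if not pending:
            return True
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    return False
```

Note that this reports only whether a task left `STARTED`; distinguishing `SUCCESS` from a failure state would need an extra check on the terminal state.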
2026-04-08 00:46:18.432785 | orchestrator | 2026-04-08 00:46:18 | INFO  | Task 25987425-6dc3-4ebb-927a-6dfdbdcbfc5c is in state STARTED
2026-04-08 00:46:18.432901 | orchestrator | 2026-04-08 00:46:18 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:46:21.477366 | orchestrator | 2026-04-08 00:46:21 | INFO  | Task dcfd88e9-df0b-4986-b4e9-5576620d0706 is in state STARTED
2026-04-08 00:46:21.477709 | orchestrator | 2026-04-08 00:46:21 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED
2026-04-08 00:46:21.481078 | orchestrator | 2026-04-08 00:46:21 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:46:21.481176 | orchestrator | 2026-04-08 00:46:21 | INFO  | Task 25987425-6dc3-4ebb-927a-6dfdbdcbfc5c is in state SUCCESS
2026-04-08 00:46:21.481184 | orchestrator | 2026-04-08 00:46:21 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:46:24.509574 | orchestrator | 2026-04-08 00:46:24 | INFO  | Task dcfd88e9-df0b-4986-b4e9-5576620d0706 is in state SUCCESS
2026-04-08 00:46:24.509657 | orchestrator | 2026-04-08 00:46:24 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED
2026-04-08 00:46:24.510255 | orchestrator | 2026-04-08 00:46:24 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:46:24.510365 | orchestrator | 2026-04-08 00:46:24 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:46:27.547627 | orchestrator | 2026-04-08 00:46:27 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED
2026-04-08 00:46:27.549287 | orchestrator | 2026-04-08 00:46:27 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:46:27.549319 | orchestrator | 2026-04-08 00:46:27 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:46:30.591425 | orchestrator | 2026-04-08 00:46:30 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED
2026-04-08 00:46:30 .. 00:48:20 | orchestrator | [identical polling rounds elided: tasks d5954de4-762e-491b-b125-514293696267 and aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf remain in state STARTED, re-checked every ~3 seconds until 00:48:20]
2026-04-08 00:48:23.151743 | orchestrator | 2026-04-08 00:48:23 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state STARTED
2026-04-08 00:48:23.151923 | orchestrator | 2026-04-08 00:48:23 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:48:23.151937 | orchestrator | 2026-04-08 00:48:23 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:48:26.194355 | orchestrator | 2026-04-08 00:48:26 | INFO  | Task f157936e-1dae-4e2b-b755-a86927bef45f is in state STARTED
2026-04-08 00:48:26.202528 | orchestrator | 2026-04-08 00:48:26 | INFO  | Task d5954de4-762e-491b-b125-514293696267 is in state SUCCESS
2026-04-08 00:48:26.204833 | orchestrator |
2026-04-08 00:48:26.204904 | orchestrator |
2026-04-08 00:48:26.204912 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-04-08 00:48:26.204932 | orchestrator |
2026-04-08 00:48:26.204939 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-08 00:48:26.204953 | orchestrator | Wednesday 08 April 2026 00:46:16 +0000 (0:00:00.230) 0:00:00.230 *******
2026-04-08 00:48:26.204980 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-08 00:48:26.204991 | orchestrator |
2026-04-08 00:48:26.205000 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-08 00:48:26.205008 | orchestrator | Wednesday 08 April 2026 00:46:17 +0000 (0:00:01.081) 0:00:01.311 *******
2026-04-08 00:48:26.205017 | orchestrator | changed: [testbed-manager]
2026-04-08 00:48:26.205026 | orchestrator |
2026-04-08 00:48:26.205034 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-04-08 00:48:26.205042 | orchestrator | Wednesday 08 April 2026 00:46:18 +0000 (0:00:01.504) 0:00:02.815 *******
2026-04-08 00:48:26.205051 | orchestrator | changed: [testbed-manager]
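The "Change server address in the kubeconfig file" task rewrites the `server:` URL in the kubeconfig fetched from testbed-node-0 (k3s writes a local address by default) so the API server is reachable from the manager. The playbook's actual mechanism is not shown in the log; a minimal sketch of such a rewrite as a plain-text substitution:

```python
import re

def set_kubeconfig_server(kubeconfig_text, new_server):
    """Replace the value of every 'server:' line in a kubeconfig document.

    new_server is the externally reachable API endpoint, e.g. the address of
    testbed-node-0 (192.168.16.10 in this run, port assumed to be 6443).
    """
    return re.sub(
        r"(?m)^(\s*server:\s*)\S+$",          # match 'server: <url>' lines
        lambda m: m.group(1) + new_server,    # keep indentation and key, swap URL
        kubeconfig_text,
    )
```

A YAML-aware tool (e.g. `yq`, or Ansible's `replace` module) would be the more robust choice in a playbook; the regex form just illustrates the transformation.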
2026-04-08 00:48:26.205211 | orchestrator |
2026-04-08 00:48:26.205225 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:48:26.205236 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:48:26.205272 | orchestrator |
2026-04-08 00:48:26.205282 | orchestrator |
2026-04-08 00:48:26.205293 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:48:26.205303 | orchestrator | Wednesday 08 April 2026 00:46:19 +0000 (0:00:00.467) 0:00:03.282 *******
2026-04-08 00:48:26.205313 | orchestrator | ===============================================================================
2026-04-08 00:48:26.205323 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.50s
2026-04-08 00:48:26.205333 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.08s
2026-04-08 00:48:26.205343 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.47s
2026-04-08 00:48:26.205353 | orchestrator |
2026-04-08 00:48:26.205363 | orchestrator |
2026-04-08 00:48:26.205373 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-08 00:48:26.205383 | orchestrator |
2026-04-08 00:48:26.205393 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-08 00:48:26.205403 | orchestrator | Wednesday 08 April 2026 00:46:16 +0000 (0:00:00.261) 0:00:00.261 *******
2026-04-08 00:48:26.205414 | orchestrator | ok: [testbed-manager]
2026-04-08 00:48:26.205425 | orchestrator |
2026-04-08 00:48:26.205435 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-08 00:48:26.205458 | orchestrator | Wednesday 08 April 2026 00:46:17 +0000 (0:00:00.824) 0:00:01.085 *******
2026-04-08 00:48:26.205469 | orchestrator | ok: [testbed-manager]
2026-04-08 00:48:26.205479 | orchestrator |
2026-04-08 00:48:26.205490 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-08 00:48:26.205500 | orchestrator | Wednesday 08 April 2026 00:46:17 +0000 (0:00:00.585) 0:00:01.671 *******
2026-04-08 00:48:26.205510 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-08 00:48:26.205520 | orchestrator |
2026-04-08 00:48:26.205531 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-08 00:48:26.205541 | orchestrator | Wednesday 08 April 2026 00:46:18 +0000 (0:00:01.040) 0:00:02.711 *******
2026-04-08 00:48:26.205550 | orchestrator | changed: [testbed-manager]
2026-04-08 00:48:26.205611 | orchestrator |
2026-04-08 00:48:26.205622 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-08 00:48:26.205631 | orchestrator | Wednesday 08 April 2026 00:46:19 +0000 (0:00:01.142) 0:00:03.853 *******
2026-04-08 00:48:26.205641 | orchestrator | changed: [testbed-manager]
2026-04-08 00:48:26.205651 | orchestrator |
2026-04-08 00:48:26.205658 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-08 00:48:26.205665 | orchestrator | Wednesday 08 April 2026 00:46:20 +0000 (0:00:00.582) 0:00:04.436 *******
2026-04-08 00:48:26.205671 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-08 00:48:26.205677 | orchestrator |
2026-04-08 00:48:26.205683 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-08 00:48:26.205690 | orchestrator | Wednesday 08 April 2026 00:46:22 +0000 (0:00:01.715) 0:00:06.152 *******
2026-04-08 00:48:26.205697 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-08 00:48:26.205703 | orchestrator |
2026-04-08 00:48:26.205709 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-08 00:48:26.205715 | orchestrator | Wednesday 08 April 2026 00:46:22 +0000 (0:00:00.776) 0:00:06.928 *******
2026-04-08 00:48:26.205997 | orchestrator | ok: [testbed-manager]
2026-04-08 00:48:26.206007 | orchestrator |
2026-04-08 00:48:26.206053 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-08 00:48:26.206061 | orchestrator | Wednesday 08 April 2026 00:46:23 +0000 (0:00:00.398) 0:00:07.326 *******
2026-04-08 00:48:26.206068 | orchestrator | ok: [testbed-manager]
2026-04-08 00:48:26.206074 | orchestrator |
2026-04-08 00:48:26.206080 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:48:26.206087 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:48:26.206104 | orchestrator |
2026-04-08 00:48:26.206111 | orchestrator |
2026-04-08 00:48:26.206117 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:48:26.206123 | orchestrator | Wednesday 08 April 2026 00:46:23 +0000 (0:00:00.279) 0:00:07.606 *******
2026-04-08 00:48:26.206129 | orchestrator | ===============================================================================
2026-04-08 00:48:26.206137 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.72s
2026-04-08 00:48:26.206146 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.14s
2026-04-08 00:48:26.206156 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.04s
2026-04-08 00:48:26.206181 | orchestrator | Get home directory of operator user ------------------------------------- 0.82s
2026-04-08 00:48:26.206192 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.78s
2026-04-08 00:48:26.206203 | orchestrator | Create .kube directory -------------------------------------------------- 0.59s
2026-04-08 00:48:26.206268 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.58s
2026-04-08 00:48:26.206278 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.40s
2026-04-08 00:48:26.206285 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.28s
2026-04-08 00:48:26.206291 | orchestrator |
2026-04-08 00:48:26.206297 | orchestrator |
2026-04-08 00:48:26.206304 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 00:48:26.206310 | orchestrator |
2026-04-08 00:48:26.206316 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 00:48:26.206364 | orchestrator | Wednesday 08 April 2026 00:42:59 +0000 (0:00:00.606) 0:00:00.606 *******
2026-04-08 00:48:26.206373 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:48:26.206380 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:48:26.206386 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:48:26.206392 | orchestrator |
2026-04-08 00:48:26.206398 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 00:48:26.206405 | orchestrator | Wednesday 08 April 2026 00:43:00 +0000 (0:00:00.911) 0:00:01.517 *******
2026-04-08 00:48:26.206411 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-04-08 00:48:26.206420 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-04-08 00:48:26.206725 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-08 00:48:26.206736 | orchestrator |
2026-04-08 00:48:26.206747 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-04-08 00:48:26.206806 | orchestrator |
2026-04-08 00:48:26.206818 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-08 00:48:26.206829 | orchestrator | Wednesday 08 April 2026 00:43:01 +0000 (0:00:01.113) 0:00:02.630 *******
2026-04-08 00:48:26.206880 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-2, testbed-node-1
2026-04-08 00:48:26.206894 | orchestrator |
2026-04-08 00:48:26.206905 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-08 00:48:26.206915 | orchestrator | Wednesday 08 April 2026 00:43:03 +0000 (0:00:01.616) 0:00:04.247 *******
2026-04-08 00:48:26.206925 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:48:26.206936 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:48:26.206989 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:48:26.207000 | orchestrator |
2026-04-08 00:48:26.207011 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-08 00:48:26.207021 | orchestrator | Wednesday 08 April 2026 00:43:05 +0000 (0:00:02.567) 0:00:06.815 *******
2026-04-08 00:48:26.207032 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:48:26.207042 | orchestrator |
2026-04-08 00:48:26.207052 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-08 00:48:26.207063 | orchestrator | Wednesday 08 April 2026 00:43:06 +0000 (0:00:00.710) 0:00:07.525 *******
2026-04-08 00:48:26.207160 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:48:26.207172 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:48:26.207182 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:48:26.207192 | orchestrator |
2026-04-08 00:48:26.207202 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-08 00:48:26.207212 | orchestrator | Wednesday 08 April 2026 00:43:07 +0000 (0:00:01.173) 0:00:08.699 ******* 2026-04-08
00:48:26.207223 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-08 00:48:26.207233 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-08 00:48:26.207243 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-08 00:48:26.207312 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-08 00:48:26.207323 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-08 00:48:26.207390 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-08 00:48:26.207402 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-08 00:48:26.207413 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-08 00:48:26.207423 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-08 00:48:26.207434 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-08 00:48:26.207445 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-08 00:48:26.207455 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-08 00:48:26.208535 | orchestrator | 2026-04-08 00:48:26.208555 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-08 00:48:26.208586 | orchestrator | Wednesday 08 April 2026 00:43:11 +0000 (0:00:03.992) 0:00:12.692 ******* 2026-04-08 00:48:26.208592 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-08 00:48:26.208599 | orchestrator | changed: [testbed-node-2] 
=> (item=ip_vs) 2026-04-08 00:48:26.208605 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-08 00:48:26.208610 | orchestrator | 2026-04-08 00:48:26.208645 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-08 00:48:26.208707 | orchestrator | Wednesday 08 April 2026 00:43:13 +0000 (0:00:01.594) 0:00:14.286 ******* 2026-04-08 00:48:26.208716 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-08 00:48:26.208721 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-08 00:48:26.208727 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-04-08 00:48:26.208733 | orchestrator | 2026-04-08 00:48:26.208738 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-08 00:48:26.208744 | orchestrator | Wednesday 08 April 2026 00:43:15 +0000 (0:00:02.241) 0:00:16.527 ******* 2026-04-08 00:48:26.208750 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-04-08 00:48:26.208756 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.208761 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-04-08 00:48:26.208767 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.208773 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-04-08 00:48:26.208778 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.208784 | orchestrator | 2026-04-08 00:48:26.208789 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-04-08 00:48:26.208795 | orchestrator | Wednesday 08 April 2026 00:43:16 +0000 (0:00:01.358) 0:00:17.886 ******* 2026-04-08 00:48:26.208820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-08 00:48:26.208841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-08 00:48:26.208847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-08 00:48:26.208853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-08 00:48:26.208860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-08 00:48:26.208882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-08 00:48:26.208890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-08 00:48:26.208904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-08 00:48:26.208992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-08 00:48:26.209002 | orchestrator | 2026-04-08 00:48:26.209011 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-08 00:48:26.209019 | orchestrator | Wednesday 08 April 2026 00:43:18 +0000 (0:00:01.785) 0:00:19.671 ******* 2026-04-08 00:48:26.209028 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.209037 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.209045 | 
orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.209408 | orchestrator | 2026-04-08 00:48:26.209431 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-04-08 00:48:26.209437 | orchestrator | Wednesday 08 April 2026 00:43:19 +0000 (0:00:01.167) 0:00:20.839 ******* 2026-04-08 00:48:26.209445 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-04-08 00:48:26.209455 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-04-08 00:48:26.209465 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-04-08 00:48:26.209473 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-04-08 00:48:26.209483 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-04-08 00:48:26.209550 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-04-08 00:48:26.209563 | orchestrator | 2026-04-08 00:48:26.209571 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-04-08 00:48:26.209580 | orchestrator | Wednesday 08 April 2026 00:43:22 +0000 (0:00:02.882) 0:00:23.722 ******* 2026-04-08 00:48:26.209588 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.209597 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.209606 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.209615 | orchestrator | 2026-04-08 00:48:26.209624 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-04-08 00:48:26.209633 | orchestrator | Wednesday 08 April 2026 00:43:23 +0000 (0:00:01.343) 0:00:25.065 ******* 2026-04-08 00:48:26.209704 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:26.209715 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:26.209724 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:26.209734 | orchestrator | 2026-04-08 00:48:26.209744 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-08 
00:48:26.209753 | orchestrator | Wednesday 08 April 2026 00:43:25 +0000 (0:00:01.501) 0:00:26.567 ******* 2026-04-08 00:48:26.209829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-08 00:48:26.209853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:48:26.209865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:48:26.209885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6e97a89a2eac81bcd78dc3ec713e28418bfcb857', '__omit_place_holder__6e97a89a2eac81bcd78dc3ec713e28418bfcb857'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-08 00:48:26.209895 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.209905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-08 00:48:26.209918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:48:26.209928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:48:26.210248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6e97a89a2eac81bcd78dc3ec713e28418bfcb857', '__omit_place_holder__6e97a89a2eac81bcd78dc3ec713e28418bfcb857'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-08 00:48:26.210275 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.210285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-08 00:48:26.210303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:48:26.210312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:48:26.210321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6e97a89a2eac81bcd78dc3ec713e28418bfcb857', '__omit_place_holder__6e97a89a2eac81bcd78dc3ec713e28418bfcb857'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-08 00:48:26.210330 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.210339 | orchestrator | 2026-04-08 00:48:26.210347 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-08 00:48:26.210355 | orchestrator | Wednesday 08 April 2026 00:43:26 +0000 (0:00:01.135) 0:00:27.703 ******* 2026-04-08 00:48:26.210364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-08 00:48:26.210467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-08 00:48:26.210481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-08 00:48:26.210498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-08 00:48:26.210504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  
2026-04-08 00:48:26.210510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6e97a89a2eac81bcd78dc3ec713e28418bfcb857', '__omit_place_holder__6e97a89a2eac81bcd78dc3ec713e28418bfcb857'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-08 00:48:26.210516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-08 00:48:26.210573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 
00:48:26.210581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6e97a89a2eac81bcd78dc3ec713e28418bfcb857', '__omit_place_holder__6e97a89a2eac81bcd78dc3ec713e28418bfcb857'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-08 00:48:26.210587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-08 00:48:26.210596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:48:26.210602 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6e97a89a2eac81bcd78dc3ec713e28418bfcb857', '__omit_place_holder__6e97a89a2eac81bcd78dc3ec713e28418bfcb857'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-08 00:48:26.210608 | orchestrator |
2026-04-08 00:48:26.210613 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-04-08 00:48:26.210619 | orchestrator | Wednesday 08 April 2026 00:43:32 +0000 (0:00:05.734) 0:00:33.437 *******
2026-04-08 00:48:26.210625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-08 00:48:26.210682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-08 00:48:26.210697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-08 00:48:26.210705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:48:26.210720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:48:26.210729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:48:26.210739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:48:26.210755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:48:26.210822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:48:26.210834 | orchestrator |
2026-04-08 00:48:26.210844 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-04-08 00:48:26.210853 | orchestrator | Wednesday 08 April 2026 00:43:37 +0000 (0:00:05.280) 0:00:38.717 *******
2026-04-08 00:48:26.210862 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-08 00:48:26.210872 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-08 00:48:26.210881 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-08 00:48:26.210890 | orchestrator |
2026-04-08 00:48:26.210898 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-04-08 00:48:26.210907 | orchestrator | Wednesday 08 April 2026 00:43:39 +0000 (0:00:02.133) 0:00:40.850 *******
2026-04-08 00:48:26.210917 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-08 00:48:26.210926 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-08 00:48:26.210935 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-08 00:48:26.210944 | orchestrator |
2026-04-08 00:48:26.210953 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-04-08 00:48:26.210981 | orchestrator | Wednesday 08 April 2026 00:43:44 +0000 (0:00:04.706) 0:00:45.557 *******
2026-04-08 00:48:26.210990 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:26.211000 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:26.211009 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:26.211018 | orchestrator |
2026-04-08 00:48:26.211027 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-04-08 00:48:26.211036 | orchestrator | Wednesday 08 April 2026 00:43:45 +0000 (0:00:00.682) 0:00:46.240 *******
2026-04-08 00:48:26.211045 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-08 00:48:26.211060 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-08 00:48:26.211070 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-08 00:48:26.211086 | orchestrator |
2026-04-08 00:48:26.211095 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-04-08 00:48:26.211104 | orchestrator | Wednesday 08 April 2026 00:43:47 +0000 (0:00:02.564) 0:00:48.805 *******
2026-04-08 00:48:26.211113 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-08 00:48:26.211122 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-08 00:48:26.211131 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-08 00:48:26.211141 | orchestrator |
2026-04-08 00:48:26.211150 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-08 00:48:26.211159 | orchestrator | Wednesday 08 April 2026 00:43:49 +0000 (0:00:01.862) 0:00:50.667 *******
2026-04-08 00:48:26.211169 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:48:26.211178 | orchestrator |
2026-04-08 00:48:26.211187 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-04-08 00:48:26.211196 | orchestrator | Wednesday 08 April 2026 00:43:50 +0000 (0:00:00.633) 0:00:51.301 *******
2026-04-08 00:48:26.211206 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-04-08 00:48:26.211215 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-04-08 00:48:26.211224 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-04-08 00:48:26.211234 | orchestrator |
2026-04-08 00:48:26.211242 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-04-08 00:48:26.211252 | orchestrator | Wednesday 08 April 2026 00:43:52 +0000 (0:00:02.611) 0:00:53.912 *******
2026-04-08 00:48:26.211261 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-04-08 00:48:26.211270 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-04-08 00:48:26.211279 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-04-08 00:48:26.211288 | orchestrator |
2026-04-08 00:48:26.211297 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] ***************************
2026-04-08 00:48:26.211305 | orchestrator | Wednesday 08 April 2026 00:43:55 +0000 (0:00:02.206) 0:00:56.119 *******
2026-04-08 00:48:26.211314 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:26.211323 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:26.211331 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:26.211340 | orchestrator |
2026-04-08 00:48:26.211348 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] ****************************
2026-04-08 00:48:26.211358 | orchestrator | Wednesday 08 April 2026 00:43:55 +0000 (0:00:00.457) 0:00:56.577 *******
2026-04-08 00:48:26.211367 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:26.211376 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:26.211385 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:26.211394 | orchestrator |
2026-04-08 00:48:26.211467 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-04-08 00:48:26.211478 | orchestrator | Wednesday 08 April 2026 00:43:55 +0000 (0:00:00.287) 0:00:56.864 *******
2026-04-08 00:48:26.211488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-08 00:48:26.211499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-08 00:48:26.211520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-08 00:48:26.211530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:48:26.211540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:48:26.211549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:48:26.211611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:48:26.211624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:48:26.211640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:48:26.211649 | orchestrator |
2026-04-08 00:48:26.211659 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-04-08 00:48:26.211668 | orchestrator | Wednesday 08 April 2026 00:43:59 +0000 (0:00:03.780) 0:01:00.645 *******
2026-04-08 00:48:26.211686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-08 00:48:26.211696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:48:26.211705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:48:26.211715 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:26.211774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-08 00:48:26.211785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:48:26.211801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:48:26.211811 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:26.211825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-08 00:48:26.211836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:48:26.211846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:48:26.211856 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:26.211865 | orchestrator |
2026-04-08 00:48:26.211875 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-04-08 00:48:26.211885 | orchestrator | Wednesday 08 April 2026 00:44:00 +0000 (0:00:00.568) 0:01:01.213 *******
2026-04-08 00:48:26.211895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-08 00:48:26.211954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:48:26.212029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:48:26.212039 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:26.212048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-08 00:48:26.212059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:48:26.212065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:48:26.212071 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:26.212076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-08 00:48:26.212129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:48:26.212149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:48:26.212159 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:26.212168 | orchestrator |
2026-04-08 00:48:26.212177 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-04-08 00:48:26.212187 | orchestrator | Wednesday 08 April 2026 00:44:01 +0000 (0:00:00.954) 0:01:02.168 *******
2026-04-08 00:48:26.212196 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-08 00:48:26.212207 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-08 00:48:26.212216 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-08 00:48:26.212225 | orchestrator |
2026-04-08 00:48:26.212231 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-04-08 00:48:26.212237 | orchestrator | Wednesday 08 April 2026 00:44:02 +0000 (0:00:01.568) 0:01:03.736 *******
2026-04-08 00:48:26.212242 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-08 00:48:26.212248 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-08 00:48:26.212253 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-08 00:48:26.212258 | orchestrator |
2026-04-08 00:48:26.212268 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-04-08 00:48:26.212273 | orchestrator | Wednesday 08 April 2026 00:44:04 +0000 (0:00:01.660) 0:01:05.397 *******
2026-04-08 00:48:26.212279 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-08 00:48:26.212284 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-08 00:48:26.212290 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-08 00:48:26.212295 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-08 00:48:26.212300 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:26.212306 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-08 00:48:26.212311 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:26.212316 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-08 00:48:26.212322 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:26.212327 | orchestrator |
2026-04-08 00:48:26.212332 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-04-08 00:48:26.212338 | orchestrator | Wednesday 08 April 2026 00:44:05 +0000 (0:00:00.780) 0:01:06.178 *******
2026-04-08 00:48:26.212343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-08 00:48:26.212389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-08 00:48:26.212396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-08 00:48:26.212402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:48:26.212411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:48:26.212417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-08 00:48:26.212423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-08 00:48:26.212432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-08 00:48:26.212452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-08 00:48:26.212458 | orchestrator | 2026-04-08 00:48:26.212464 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-08 00:48:26.212469 | orchestrator | Wednesday 08 April 2026 00:44:07 +0000 (0:00:01.992) 0:01:08.170 ******* 2026-04-08 00:48:26.212475 | orchestrator | changed: [testbed-node-0] => { 2026-04-08 00:48:26.212480 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:48:26.212486 | orchestrator | } 2026-04-08 00:48:26.212492 | orchestrator | changed: [testbed-node-1] => { 2026-04-08 00:48:26.212497 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:48:26.212502 | orchestrator | } 2026-04-08 00:48:26.212508 | orchestrator | changed: [testbed-node-2] => { 2026-04-08 00:48:26.212513 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:48:26.212519 | orchestrator | } 2026-04-08 00:48:26.212524 | orchestrator | 2026-04-08 00:48:26.212529 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-08 00:48:26.212534 | orchestrator | Wednesday 08 April 2026 00:44:07 +0000 (0:00:00.282) 0:01:08.453 ******* 2026-04-08 00:48:26.212539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-08 00:48:26.212548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:48:26.212553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:48:26.212563 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.212568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-08 00:48:26.212573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:48:26.212597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:48:26.212603 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.212608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-08 00:48:26.212613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:48:26.212621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:48:26.212630 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.212635 | orchestrator | 2026-04-08 00:48:26.212640 | orchestrator | TASK [include_role : aodh] 
***************************************************** 2026-04-08 00:48:26.212645 | orchestrator | Wednesday 08 April 2026 00:44:08 +0000 (0:00:01.033) 0:01:09.487 ******* 2026-04-08 00:48:26.212650 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.212654 | orchestrator | 2026-04-08 00:48:26.212659 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-08 00:48:26.212664 | orchestrator | Wednesday 08 April 2026 00:44:08 +0000 (0:00:00.618) 0:01:10.106 ******* 2026-04-08 00:48:26.212670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.212689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-08 00:48:26.212696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.212701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.212716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': 
{'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.212721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-08 00:48:26.212727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.212744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.212750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.212755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-08 00:48:26.212767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.212772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.212777 | orchestrator | 2026-04-08 00:48:26.212782 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-08 00:48:26.212786 | orchestrator | Wednesday 08 April 2026 00:44:11 +0000 (0:00:02.936) 0:01:13.043 ******* 2026-04-08 00:48:26.212792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.212809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-08 00:48:26.212815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-listener 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.212820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.212833 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.212838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.212843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-08 00:48:26.212867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.212885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.212891 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.212896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.212908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-08 00:48:26.212914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.212919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.212924 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.212928 | orchestrator | 2026-04-08 00:48:26.212934 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-08 00:48:26.212938 | orchestrator | Wednesday 08 April 2026 00:44:12 +0000 (0:00:00.611) 0:01:13.655 ******* 2026-04-08 00:48:26.212944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.212952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.212981 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.213009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.213019 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.213026 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.213035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.213049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.213058 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.213064 | orchestrator | 2026-04-08 00:48:26.213070 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-08 00:48:26.213076 | orchestrator | Wednesday 08 April 2026 00:44:13 +0000 (0:00:00.950) 0:01:14.605 ******* 2026-04-08 00:48:26.213082 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.213088 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.213094 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.213099 | orchestrator | 2026-04-08 00:48:26.213105 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-08 00:48:26.213110 | orchestrator | Wednesday 08 April 2026 00:44:14 +0000 (0:00:01.127) 0:01:15.733 ******* 2026-04-08 00:48:26.213116 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.213122 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.213127 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.213133 | orchestrator | 
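The service items iterated by the tasks above each carry a `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`) whose fields mirror Docker's container healthcheck options. As a rough illustration only — this helper is hypothetical and not part of kolla-ansible; the second-based units are an assumption inferred from the logged values — the mapping from such a dict to `docker run` flags could be sketched as:

```python
# Hypothetical helper: translate a kolla-style healthcheck dict, as seen in
# the task output above, into `docker run` CLI flags. Field names match the
# logged dicts; the "s" (seconds) suffix on durations is an assumption.

def healthcheck_to_flags(hc: dict) -> list[str]:
    """Map a kolla-style healthcheck dict to docker run health flags."""
    flags = [
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]
    # 'test' is stored as ['CMD-SHELL', '<command>']; only the command
    # string itself is passed to --health-cmd.
    kind, cmd = hc["test"]
    if kind == "CMD-SHELL":
        flags += ["--health-cmd", cmd]
    return flags

# Example values taken from the haproxy item logged above.
hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"],
    "timeout": "30",
}
print(healthcheck_to_flags(hc))
```

This also shows why the `test` entries in the log always begin with `CMD-SHELL`: the check command (`healthcheck_curl`, `healthcheck_port`, `healthcheck_listen`) is run through a shell inside the container rather than exec'd directly.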
2026-04-08 00:48:26.213138 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-08 00:48:26.213144 | orchestrator | Wednesday 08 April 2026 00:44:16 +0000 (0:00:01.734) 0:01:17.467 ******* 2026-04-08 00:48:26.213149 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.213158 | orchestrator | 2026-04-08 00:48:26.213163 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-08 00:48:26.213169 | orchestrator | Wednesday 08 April 2026 00:44:16 +0000 (0:00:00.545) 0:01:18.013 ******* 2026-04-08 00:48:26.213176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.213182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.213208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.213223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.213235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.213243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.213252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.213279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.213294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.213303 | orchestrator | 2026-04-08 00:48:26.213310 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-08 00:48:26.213318 | orchestrator | Wednesday 08 April 2026 00:44:19 +0000 (0:00:03.016) 0:01:21.029 ******* 2026-04-08 00:48:26.213331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.213340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.213348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.213355 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.213381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  
2026-04-08 00:48:26.213397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.213407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.213416 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.213429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.213439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.213448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.213483 | orchestrator | skipping: [testbed-node-2] 2026-04-08 
00:48:26.213492 | orchestrator | 2026-04-08 00:48:26.213500 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-08 00:48:26.213508 | orchestrator | Wednesday 08 April 2026 00:44:20 +0000 (0:00:00.766) 0:01:21.796 ******* 2026-04-08 00:48:26.213517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.213526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.213536 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.213545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.213554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.213563 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.213572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.213585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.213594 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.213603 | orchestrator | 2026-04-08 00:48:26.213612 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-08 00:48:26.213621 | orchestrator | Wednesday 08 April 2026 00:44:21 +0000 (0:00:00.722) 0:01:22.518 ******* 2026-04-08 00:48:26.213630 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.213639 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.213648 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.213658 | orchestrator | 2026-04-08 00:48:26.213667 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-08 00:48:26.213675 | orchestrator | Wednesday 08 April 2026 00:44:22 +0000 (0:00:01.081) 0:01:23.600 ******* 2026-04-08 00:48:26.213684 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.213693 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.213702 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.213711 | orchestrator | 2026-04-08 00:48:26.213720 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-08 00:48:26.213729 | orchestrator | Wednesday 08 April 2026 00:44:24 +0000 (0:00:01.751) 0:01:25.351 ******* 2026-04-08 00:48:26.213738 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.213747 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.213762 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.213772 | orchestrator | 2026-04-08 00:48:26.213780 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-08 00:48:26.213789 | orchestrator | Wednesday 08 April 2026 
00:44:24 +0000 (0:00:00.252) 0:01:25.603 ******* 2026-04-08 00:48:26.213799 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.213807 | orchestrator | 2026-04-08 00:48:26.213816 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-08 00:48:26.213825 | orchestrator | Wednesday 08 April 2026 00:44:25 +0000 (0:00:00.768) 0:01:26.371 ******* 2026-04-08 00:48:26.213835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-08 00:48:26.213865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 
check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-08 00:48:26.213875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-08 00:48:26.213885 | orchestrator | 2026-04-08 00:48:26.213893 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-08 00:48:26.213902 | orchestrator | Wednesday 08 April 2026 00:44:27 +0000 (0:00:02.680) 0:01:29.051 ******* 2026-04-08 00:48:26.213915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': 
['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-08 00:48:26.213930 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.213939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-08 00:48:26.213948 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.214065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 
check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-08 00:48:26.214078 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.214086 | orchestrator | 2026-04-08 00:48:26.214095 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-08 00:48:26.214104 | orchestrator | Wednesday 08 April 2026 00:44:30 +0000 (0:00:02.176) 0:01:31.228 ******* 2026-04-08 00:48:26.214112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-08 00:48:26.214123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-08 00:48:26.214133 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.214141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-08 00:48:26.214155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-08 00:48:26.214169 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.214178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-08 00:48:26.214185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-08 00:48:26.214193 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.214201 | orchestrator | 2026-04-08 00:48:26.214209 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-08 00:48:26.214217 | orchestrator | Wednesday 08 April 2026 00:44:32 +0000 (0:00:02.111) 0:01:33.339 ******* 2026-04-08 00:48:26.214224 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.214233 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.214240 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.214248 | orchestrator | 2026-04-08 00:48:26.214256 | orchestrator | 
TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-08 00:48:26.214263 | orchestrator | Wednesday 08 April 2026 00:44:32 +0000 (0:00:00.387) 0:01:33.727 ******* 2026-04-08 00:48:26.214272 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.214280 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.214287 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.214294 | orchestrator | 2026-04-08 00:48:26.214299 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-08 00:48:26.214304 | orchestrator | Wednesday 08 April 2026 00:44:33 +0000 (0:00:01.172) 0:01:34.899 ******* 2026-04-08 00:48:26.214310 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.214318 | orchestrator | 2026-04-08 00:48:26.214326 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-08 00:48:26.214333 | orchestrator | Wednesday 08 April 2026 00:44:34 +0000 (0:00:00.875) 0:01:35.774 ******* 2026-04-08 00:48:26.214365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.214377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.214440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.214470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214504 | orchestrator | 2026-04-08 00:48:26.214509 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-08 00:48:26.214514 | orchestrator | Wednesday 08 April 2026 00:44:38 +0000 (0:00:03.962) 0:01:39.737 ******* 2026-04-08 00:48:26.214522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.214528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214555 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.214560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.214572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214587 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.214604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.214610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214633 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.214638 | orchestrator | 2026-04-08 00:48:26.214643 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-08 00:48:26.214647 | orchestrator | Wednesday 08 April 2026 00:44:39 +0000 (0:00:00.601) 0:01:40.338 ******* 2026-04-08 00:48:26.214652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.214658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.214663 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.214668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.214673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.214677 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.214682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.214699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.214712 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.214720 | orchestrator | 2026-04-08 00:48:26.214727 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-08 00:48:26.214736 | orchestrator | Wednesday 08 April 2026 00:44:40 +0000 (0:00:00.799) 0:01:41.137 ******* 2026-04-08 00:48:26.214744 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.214752 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.214761 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.214769 | orchestrator | 
2026-04-08 00:48:26.214778 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-08 00:48:26.214786 | orchestrator | Wednesday 08 April 2026 00:44:41 +0000 (0:00:01.432) 0:01:42.570 ******* 2026-04-08 00:48:26.214795 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.214800 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.214805 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.214810 | orchestrator | 2026-04-08 00:48:26.214814 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-08 00:48:26.214819 | orchestrator | Wednesday 08 April 2026 00:44:43 +0000 (0:00:01.753) 0:01:44.323 ******* 2026-04-08 00:48:26.214823 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.214828 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.214833 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.214838 | orchestrator | 2026-04-08 00:48:26.214842 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-08 00:48:26.214847 | orchestrator | Wednesday 08 April 2026 00:44:43 +0000 (0:00:00.263) 0:01:44.587 ******* 2026-04-08 00:48:26.214852 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.214856 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.214861 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.214865 | orchestrator | 2026-04-08 00:48:26.214870 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-08 00:48:26.214874 | orchestrator | Wednesday 08 April 2026 00:44:43 +0000 (0:00:00.266) 0:01:44.853 ******* 2026-04-08 00:48:26.214879 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.214884 | orchestrator | 2026-04-08 00:48:26.214892 | orchestrator | TASK [haproxy-config : Copying over designate haproxy 
config] ****************** 2026-04-08 00:48:26.214899 | orchestrator | Wednesday 08 April 2026 00:44:44 +0000 (0:00:01.014) 0:01:45.868 ******* 2026-04-08 00:48:26.214912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.214921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-08 00:48:26.214935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.214992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.215054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-08 00:48:26.215062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.215138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  
2026-04-08 00:48:26.215147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 
00:48:26.215181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215197 | orchestrator | 2026-04-08 00:48:26.215205 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-08 00:48:26.215213 | orchestrator | Wednesday 08 April 2026 00:44:48 +0000 (0:00:03.426) 0:01:49.294 ******* 2026-04-08 00:48:26.215239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.215248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-08 00:48:26.215260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 
5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2026-04-08 00:48:26.215315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215323 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.215331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.215342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.215356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-08 00:48:26.215365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-08 00:48:26.215376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215436 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.215451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.215488 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.215495 | orchestrator | 2026-04-08 00:48:26.215503 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-08 00:48:26.215511 | orchestrator | Wednesday 08 April 2026 00:44:49 +0000 (0:00:00.982) 0:01:50.277 ******* 2026-04-08 00:48:26.215519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.215528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.215536 | orchestrator | skipping: [testbed-node-0] 
2026-04-08 00:48:26.215544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.215556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.215564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.215572 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.215603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.215611 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.215620 | orchestrator | 2026-04-08 00:48:26.215628 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-08 00:48:26.215637 | orchestrator | Wednesday 08 April 2026 00:44:50 +0000 (0:00:01.175) 0:01:51.452 ******* 2026-04-08 00:48:26.215645 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.215657 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.215665 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.215673 | orchestrator | 2026-04-08 00:48:26.215681 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-08 00:48:26.215689 | orchestrator | Wednesday 08 April 2026 00:44:51 +0000 
(0:00:01.132) 0:01:52.585 ******* 2026-04-08 00:48:26.215696 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.215704 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.215711 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.215719 | orchestrator | 2026-04-08 00:48:26.215726 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-08 00:48:26.215734 | orchestrator | Wednesday 08 April 2026 00:44:53 +0000 (0:00:01.734) 0:01:54.319 ******* 2026-04-08 00:48:26.215741 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.215749 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.215756 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.215763 | orchestrator | 2026-04-08 00:48:26.215770 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-08 00:48:26.215778 | orchestrator | Wednesday 08 April 2026 00:44:53 +0000 (0:00:00.267) 0:01:54.587 ******* 2026-04-08 00:48:26.215791 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.215798 | orchestrator | 2026-04-08 00:48:26.215805 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-08 00:48:26.215813 | orchestrator | Wednesday 08 April 2026 00:44:54 +0000 (0:00:00.715) 0:01:55.302 ******* 2026-04-08 00:48:26.215826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-08 00:48:26.215842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-08 00:48:26.215862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-08 00:48:26.215876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 
ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-08 00:48:26.215888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-08 00:48:26.215901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-08 00:48:26.215909 | orchestrator | 2026-04-08 00:48:26.215917 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-08 00:48:26.215928 | orchestrator | Wednesday 08 April 2026 00:44:59 +0000 (0:00:05.189) 0:02:00.492 ******* 2026-04-08 00:48:26.215936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-08 00:48:26.215952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-08 00:48:26.215999 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.216012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-08 00:48:26.216025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-08 00:48:26.216030 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.216040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-08 00:48:26.216057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-08 00:48:26.216062 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.216067 | orchestrator | 2026-04-08 00:48:26.216072 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-08 00:48:26.216077 | orchestrator | Wednesday 08 April 2026 00:45:03 +0000 (0:00:03.918) 0:02:04.411 ******* 2026-04-08 00:48:26.216082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-08 00:48:26.216091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-08 00:48:26.216101 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.216107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-08 00:48:26.216112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-08 00:48:26.216120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-08 00:48:26.216125 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.216130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-08 00:48:26.216135 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.216140 | orchestrator | 2026-04-08 00:48:26.216145 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-08 00:48:26.216150 | orchestrator | Wednesday 08 April 2026 00:45:07 +0000 (0:00:03.841) 0:02:08.253 ******* 2026-04-08 00:48:26.216154 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.216159 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.216164 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.216168 | orchestrator | 2026-04-08 00:48:26.216173 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-08 00:48:26.216178 | orchestrator | Wednesday 08 April 2026 00:45:08 +0000 (0:00:01.198) 0:02:09.451 ******* 2026-04-08 00:48:26.216259 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.216270 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.216277 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.216283 | orchestrator | 2026-04-08 00:48:26.216290 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-08 00:48:26.216297 | orchestrator | Wednesday 08 April 2026 00:45:10 +0000 (0:00:01.811) 0:02:11.263 ******* 2026-04-08 00:48:26.216304 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.216311 | orchestrator | skipping: [testbed-node-1] 
2026-04-08 00:48:26.216317 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.216324 | orchestrator | 2026-04-08 00:48:26.216331 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-08 00:48:26.216344 | orchestrator | Wednesday 08 April 2026 00:45:10 +0000 (0:00:00.269) 0:02:11.532 ******* 2026-04-08 00:48:26.216351 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.216358 | orchestrator | 2026-04-08 00:48:26.216366 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-08 00:48:26.216373 | orchestrator | Wednesday 08 April 2026 00:45:11 +0000 (0:00:00.758) 0:02:12.291 ******* 2026-04-08 00:48:26.216387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.216394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.216405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.216599 | orchestrator | 2026-04-08 00:48:26.216608 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-08 00:48:26.216615 | orchestrator | Wednesday 08 April 2026 00:45:14 +0000 (0:00:03.188) 0:02:15.479 ******* 2026-04-08 00:48:26.216623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.216630 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.216644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.216651 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.216664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.216671 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.216678 | orchestrator | 2026-04-08 00:48:26.216685 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-08 00:48:26.216692 | orchestrator | Wednesday 08 April 2026 00:45:14 +0000 (0:00:00.361) 0:02:15.840 ******* 2026-04-08 00:48:26.216700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.216708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.216716 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.216758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.216766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.216773 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.216785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.216793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.216801 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.216808 | orchestrator | 2026-04-08 00:48:26.216814 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-08 00:48:26.216821 | orchestrator | Wednesday 08 April 2026 00:45:15 +0000 (0:00:00.652) 0:02:16.493 ******* 2026-04-08 00:48:26.216827 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.216839 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.216847 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.216853 | orchestrator | 2026-04-08 00:48:26.216861 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-08 00:48:26.216867 | orchestrator | Wednesday 08 April 2026 00:45:16 +0000 (0:00:01.512) 0:02:18.005 ******* 2026-04-08 00:48:26.216874 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.216880 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.216886 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.216893 | orchestrator | 2026-04-08 00:48:26.216899 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-08 00:48:26.216906 | orchestrator | Wednesday 08 April 2026 00:45:18 +0000 (0:00:01.883) 0:02:19.888 ******* 2026-04-08 00:48:26.216912 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.216918 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.216924 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.216931 | orchestrator | 2026-04-08 00:48:26.216939 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-08 00:48:26.216945 | orchestrator | Wednesday 
08 April 2026 00:45:19 +0000 (0:00:00.519) 0:02:20.408 ******* 2026-04-08 00:48:26.216952 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.216980 | orchestrator | 2026-04-08 00:48:26.216987 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-08 00:48:26.216993 | orchestrator | Wednesday 08 April 2026 00:45:20 +0000 (0:00:00.875) 0:02:21.284 ******* 2026-04-08 00:48:26.217006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:48:26.217016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:48:26.217034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:48:26.217044 | orchestrator | 2026-04-08 00:48:26.217048 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-08 00:48:26.217056 | orchestrator | Wednesday 08 April 2026 00:45:23 +0000 (0:00:03.766) 0:02:25.050 ******* 2026-04-08 00:48:26.217160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:48:26.217177 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.217191 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:48:26.217206 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.217218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:48:26.217226 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.217233 | orchestrator | 2026-04-08 00:48:26.217241 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-08 00:48:26.217249 | orchestrator | Wednesday 08 April 2026 00:45:24 +0000 (0:00:00.933) 0:02:25.984 ******* 2026-04-08 00:48:26.217258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-08 00:48:26.217503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-08 00:48:26.217529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-08 00:48:26.217537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-08 00:48:26.217542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-08 00:48:26.217548 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.217553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-08 00:48:26.217559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-08 00:48:26.217563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-08 00:48:26.217567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-08 00:48:26.217576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-08 00:48:26.217581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-08 00:48:26.217585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-08 00:48:26.217590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-08 00:48:26.217597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-08 00:48:26.217602 | orchestrator | skipping: 
[testbed-node-2] 2026-04-08 00:48:26.217606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-08 00:48:26.217610 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.217614 | orchestrator | 2026-04-08 00:48:26.217619 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-08 00:48:26.217623 | orchestrator | Wednesday 08 April 2026 00:45:25 +0000 (0:00:00.970) 0:02:26.955 ******* 2026-04-08 00:48:26.217628 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.217635 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.217641 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.217648 | orchestrator | 2026-04-08 00:48:26.217658 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-08 00:48:26.217664 | orchestrator | Wednesday 08 April 2026 00:45:27 +0000 (0:00:01.245) 0:02:28.200 ******* 2026-04-08 00:48:26.217670 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.217677 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.217684 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.217691 | orchestrator | 2026-04-08 00:48:26.217697 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-08 00:48:26.217703 | orchestrator | Wednesday 08 April 2026 00:45:29 +0000 (0:00:02.136) 0:02:30.336 ******* 2026-04-08 00:48:26.217709 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.217716 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.217723 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.217729 | orchestrator | 2026-04-08 00:48:26.217735 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-08 00:48:26.217742 | orchestrator | Wednesday 08 April 2026 
00:45:29 +0000 (0:00:00.538) 0:02:30.875 ******* 2026-04-08 00:48:26.217748 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.217755 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.217761 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.217768 | orchestrator | 2026-04-08 00:48:26.217774 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-08 00:48:26.217780 | orchestrator | Wednesday 08 April 2026 00:45:30 +0000 (0:00:00.321) 0:02:31.197 ******* 2026-04-08 00:48:26.217787 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.217794 | orchestrator | 2026-04-08 00:48:26.217800 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-08 00:48:26.217806 | orchestrator | Wednesday 08 April 2026 00:45:31 +0000 (0:00:00.952) 0:02:32.150 ******* 2026-04-08 00:48:26.217820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-08 00:48:26.217835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:48:26.217843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:48:26.217852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-08 00:48:26.217857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:48:26.217862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:48:26.217870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-08 00:48:26.217878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:48:26.217886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:48:26.217891 | orchestrator | 2026-04-08 00:48:26.217895 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-08 00:48:26.217899 | orchestrator | Wednesday 08 April 2026 00:45:34 +0000 (0:00:03.570) 0:02:35.721 ******* 2026-04-08 00:48:26.217904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-08 00:48:26.217908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:48:26.217920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:48:26.217925 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.217929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-08 00:48:26.217937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:48:26.217942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:48:26.217946 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.217950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-08 00:48:26.217978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:48:26.217983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:48:26.217987 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.217992 | orchestrator | 2026-04-08 00:48:26.217996 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-08 00:48:26.218000 | orchestrator | Wednesday 08 April 2026 00:45:35 +0000 (0:00:00.551) 0:02:36.272 ******* 2026-04-08 00:48:26.218005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-08 00:48:26.218009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-08 00:48:26.218043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-08 00:48:26.218047 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.218052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-08 00:48:26.218056 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.218060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-08 00:48:26.218065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-08 00:48:26.218073 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.218077 | orchestrator | 2026-04-08 00:48:26.218081 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-08 00:48:26.218085 | orchestrator | Wednesday 08 April 2026 00:45:35 +0000 (0:00:00.819) 0:02:37.091 ******* 2026-04-08 00:48:26.218090 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.218094 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.218098 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.218102 | orchestrator | 2026-04-08 00:48:26.218106 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-08 00:48:26.218111 | orchestrator | Wednesday 08 April 2026 00:45:37 +0000 (0:00:01.253) 0:02:38.344 ******* 2026-04-08 00:48:26.218118 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.218124 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.218131 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.218137 | orchestrator | 2026-04-08 00:48:26.218144 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-08 00:48:26.218151 | orchestrator | Wednesday 08 April 2026 00:45:38 +0000 (0:00:01.681) 0:02:40.026 ******* 2026-04-08 00:48:26.218157 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.218164 | orchestrator | skipping: 
[testbed-node-1] 2026-04-08 00:48:26.218171 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.218177 | orchestrator | 2026-04-08 00:48:26.218183 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-08 00:48:26.218190 | orchestrator | Wednesday 08 April 2026 00:45:39 +0000 (0:00:00.446) 0:02:40.472 ******* 2026-04-08 00:48:26.218196 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.218203 | orchestrator | 2026-04-08 00:48:26.218213 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-08 00:48:26.218220 | orchestrator | Wednesday 08 April 2026 00:45:40 +0000 (0:00:00.911) 0:02:41.384 ******* 2026-04-08 00:48:26.218227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.218239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.218245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.218255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.218263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.218267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.218272 | orchestrator | 2026-04-08 00:48:26.218276 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-08 00:48:26.218280 | orchestrator | Wednesday 08 April 2026 00:45:43 +0000 (0:00:03.571) 0:02:44.955 ******* 2026-04-08 00:48:26.218288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.218302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.218307 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.218314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.218319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.218323 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.218331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.218339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.218343 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.218347 | orchestrator | 2026-04-08 00:48:26.218351 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-08 00:48:26.218356 | orchestrator | Wednesday 08 April 2026 00:45:44 +0000 (0:00:00.806) 0:02:45.762 ******* 2026-04-08 00:48:26.218360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.218365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.218484 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.218490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.218498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-08 
00:48:26.218502 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.218540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.218546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.218550 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.218554 | orchestrator | 2026-04-08 00:48:26.218559 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-08 00:48:26.218563 | orchestrator | Wednesday 08 April 2026 00:45:45 +0000 (0:00:00.831) 0:02:46.594 ******* 2026-04-08 00:48:26.218567 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.218571 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.218575 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.218580 | orchestrator | 2026-04-08 00:48:26.218584 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-08 00:48:26.218588 | orchestrator | Wednesday 08 April 2026 00:45:46 +0000 (0:00:01.062) 0:02:47.656 ******* 2026-04-08 00:48:26.218592 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.218601 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.218606 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.218610 | orchestrator | 2026-04-08 00:48:26.218614 | orchestrator | TASK [include_role : manila] *************************************************** 2026-04-08 00:48:26.218618 | orchestrator | Wednesday 08 April 2026 00:45:48 +0000 (0:00:01.801) 0:02:49.457 ******* 2026-04-08 00:48:26.218622 | orchestrator 
| included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.218626 | orchestrator | 2026-04-08 00:48:26.218630 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-08 00:48:26.218634 | orchestrator | Wednesday 08 April 2026 00:45:49 +0000 (0:00:01.089) 0:02:50.547 ******* 2026-04-08 00:48:26.218643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.218648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': 
{'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.219338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.219360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': 
'30'}}})  2026-04-08 00:48:26.219375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.219388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.219395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.219401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.219412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.219418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.219428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.219435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.219442 | orchestrator | 2026-04-08 00:48:26.219470 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-08 00:48:26.219477 | orchestrator | Wednesday 08 April 2026 00:45:52 +0000 (0:00:03.170) 0:02:53.717 ******* 2026-04-08 00:48:26.219484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': 
True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.219491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.219500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.219512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.219523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.219530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.219539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.219546 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.219555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.219567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.219574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.219584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.219590 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.219597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.219603 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.219704 | orchestrator | 2026-04-08 00:48:26.219752 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-08 00:48:26.219759 | orchestrator | Wednesday 08 April 2026 00:45:53 +0000 (0:00:00.695) 0:02:54.413 ******* 2026-04-08 00:48:26.219766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.219773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.219780 | orchestrator | 
skipping: [testbed-node-0] 2026-04-08 00:48:26.219787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.219793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.219804 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.219814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.219821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.219827 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.219834 | orchestrator | 2026-04-08 00:48:26.219840 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-08 00:48:26.219847 | orchestrator | Wednesday 08 April 2026 00:45:54 +0000 (0:00:01.128) 0:02:55.542 ******* 2026-04-08 00:48:26.219853 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.219860 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.219866 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.219873 | orchestrator | 2026-04-08 00:48:26.219879 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-08 00:48:26.219885 | orchestrator | Wednesday 08 April 2026 
00:45:55 +0000 (0:00:01.181) 0:02:56.724 ******* 2026-04-08 00:48:26.219892 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.219898 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.219904 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.219911 | orchestrator | 2026-04-08 00:48:26.219917 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-08 00:48:26.219923 | orchestrator | Wednesday 08 April 2026 00:45:57 +0000 (0:00:02.039) 0:02:58.764 ******* 2026-04-08 00:48:26.219930 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.219936 | orchestrator | 2026-04-08 00:48:26.219943 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-08 00:48:26.219949 | orchestrator | Wednesday 08 April 2026 00:45:58 +0000 (0:00:01.147) 0:02:59.911 ******* 2026-04-08 00:48:26.219972 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-08 00:48:26.219978 | orchestrator | 2026-04-08 00:48:26.219985 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-08 00:48:26.219991 | orchestrator | Wednesday 08 April 2026 00:46:00 +0000 (0:00:01.795) 0:03:01.706 ******* 2026-04-08 00:48:26.220002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:48:26.220018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-08 00:48:26.220024 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.220038 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:48:26.220047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': 
False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-08 00:48:26.220054 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.220065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:48:26.220080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-08 00:48:26.220087 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.220093 | orchestrator | 2026-04-08 00:48:26.220101 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-08 00:48:26.220107 | orchestrator | Wednesday 08 April 2026 00:46:03 +0000 (0:00:02.541) 0:03:04.247 ******* 2026-04-08 00:48:26.220117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:48:26.220130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-08 00:48:26.220137 | 
orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.220148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:48:26.220159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-08 00:48:26.220165 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.220172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:48:26.220550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-08 00:48:26.220567 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.220571 | orchestrator | 2026-04-08 00:48:26.220575 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-08 00:48:26.220579 | orchestrator | Wednesday 08 April 2026 00:46:06 +0000 (0:00:03.801) 0:03:08.049 ******* 2026-04-08 00:48:26.220584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-08 00:48:26.220588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-08 00:48:26.220598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-08 00:48:26.220608 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.220612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-08 00:48:26.220616 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.220620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-08 00:48:26.220627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-08 00:48:26.220631 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.220635 | orchestrator | 2026-04-08 00:48:26.220639 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-08 00:48:26.220643 | orchestrator | Wednesday 08 April 2026 00:46:09 +0000 (0:00:02.803) 0:03:10.852 ******* 2026-04-08 00:48:26.220647 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.220650 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.220654 | orchestrator | changed: 
[testbed-node-1] 2026-04-08 00:48:26.220658 | orchestrator | 2026-04-08 00:48:26.220662 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-08 00:48:26.220665 | orchestrator | Wednesday 08 April 2026 00:46:11 +0000 (0:00:01.794) 0:03:12.647 ******* 2026-04-08 00:48:26.220669 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.220673 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.220677 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.220680 | orchestrator | 2026-04-08 00:48:26.220684 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-08 00:48:26.220688 | orchestrator | Wednesday 08 April 2026 00:46:12 +0000 (0:00:01.241) 0:03:13.889 ******* 2026-04-08 00:48:26.220692 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.220695 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.220699 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.220703 | orchestrator | 2026-04-08 00:48:26.220706 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-08 00:48:26.220710 | orchestrator | Wednesday 08 April 2026 00:46:13 +0000 (0:00:00.264) 0:03:14.153 ******* 2026-04-08 00:48:26.220714 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.220718 | orchestrator | 2026-04-08 00:48:26.220721 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-08 00:48:26.220725 | orchestrator | Wednesday 08 April 2026 00:46:13 +0000 (0:00:00.827) 0:03:14.981 ******* 2026-04-08 00:48:26.220737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-08 00:48:26.220741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-08 00:48:26.220745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-08 00:48:26.220749 | orchestrator | 2026-04-08 00:48:26.220753 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-08 00:48:26.220757 | orchestrator | Wednesday 08 April 2026 00:46:15 +0000 (0:00:01.504) 0:03:16.485 ******* 2026-04-08 00:48:26.220763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-08 00:48:26.220768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'active_passive': True}}}})  2026-04-08 00:48:26.220775 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.220778 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.220785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-08 00:48:26.220789 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.220793 | orchestrator | 2026-04-08 00:48:26.220797 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-08 00:48:26.220800 | orchestrator | Wednesday 08 April 2026 00:46:15 +0000 (0:00:00.336) 0:03:16.822 ******* 2026-04-08 00:48:26.220804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-08 00:48:26.220808 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.220812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'active_passive': True}})  2026-04-08 00:48:26.220816 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.220820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-08 00:48:26.220824 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.220828 | orchestrator | 2026-04-08 00:48:26.220831 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-08 00:48:26.220835 | orchestrator | Wednesday 08 April 2026 00:46:16 +0000 (0:00:00.653) 0:03:17.475 ******* 2026-04-08 00:48:26.220839 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.220843 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.220846 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.220850 | orchestrator | 2026-04-08 00:48:26.220854 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-08 00:48:26.220857 | orchestrator | Wednesday 08 April 2026 00:46:17 +0000 (0:00:00.765) 0:03:18.241 ******* 2026-04-08 00:48:26.220861 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.220865 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.220869 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.220872 | orchestrator | 2026-04-08 00:48:26.220876 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-08 00:48:26.220880 | orchestrator | Wednesday 08 April 2026 00:46:18 +0000 (0:00:01.246) 0:03:19.488 ******* 2026-04-08 00:48:26.220883 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.220887 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.220893 | orchestrator | skipping: 
[testbed-node-2] 2026-04-08 00:48:26.220897 | orchestrator | 2026-04-08 00:48:26.220900 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-08 00:48:26.220908 | orchestrator | Wednesday 08 April 2026 00:46:18 +0000 (0:00:00.302) 0:03:19.790 ******* 2026-04-08 00:48:26.220912 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.220916 | orchestrator | 2026-04-08 00:48:26.220919 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-08 00:48:26.220923 | orchestrator | Wednesday 08 April 2026 00:46:19 +0000 (0:00:01.151) 0:03:20.941 ******* 2026-04-08 00:48:26.220927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.220934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.220939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.220944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.220977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-08 00:48:26.220987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-08 00:48:26.220995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-08 00:48:26.221001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 
'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-08 00:48:26.221013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.221018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.221023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-08 00:48:26.221032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-08 00:48:26.221038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-08 00:48:26.221045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-08 00:48:26.221051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-08 00:48:26.221065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-08 00:48:26.221072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:48:26.221082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:48:26.221088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 
'timeout': '30'}}})  2026-04-08 00:48:26.221095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.221110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.221117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-08 00:48:26.221124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-08 00:48:26.221134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-08 00:48:26.221140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.221146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.221157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-08 00:48:26.221780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-08 00:48:26.221814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-08 00:48:26.221823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 
'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.221830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:48:26.221853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-08 00:48:26.221860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-08 00:48:26.221870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:48:26.221877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.221886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-08 00:48:26.221894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-08 00:48:26.221901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-08 00:48:26.221906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:48:26.221910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.221917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-08 00:48:26.221921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-08 00:48:26.221925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.221937 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-08 00:48:26.221944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:48:26.221950 | orchestrator | 2026-04-08 00:48:26.221970 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-08 00:48:26.221977 | orchestrator | Wednesday 08 April 2026 00:46:23 +0000 (0:00:04.093) 0:03:25.035 
******* 2026-04-08 00:48:26.221990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.221996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.222007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-08 00:48:26.222051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  
2026-04-08 00:48:26.222060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.222071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-08 00:48:26.222078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-08 00:48:26.222091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.222101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-08 00:48:26.222109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.222115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:48:26.222125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-08 00:48:26.222137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.222148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 
'pid_mode': ''}})  2026-04-08 00:48:26.222156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-08 00:48:26.222163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.222173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-08 00:48:26.222181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-08 00:48:26.222285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.222293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-08 00:48:26.222303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-08 00:48:26.222308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-08 00:48:26.222316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:48:26.222324 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.222328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:48:26.222332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.223171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.223200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.223214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 
'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-08 00:48:26.223234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-08 00:48:26.223243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-08 00:48:26.223257 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-08 00:48:26.223265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.223272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.223288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-08 00:48:26.223297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-08 
00:48:26.223304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:48:26.223310 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.223321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-08 00:48:26.223327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 
'timeout': '30'}}})  2026-04-08 00:48:26.223333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:48:26.223352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.223359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-08 00:48:26.223365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-08 00:48:26.223374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.223380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-08 00:48:26.223387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-08 00:48:26.223399 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.223404 | orchestrator | 2026-04-08 00:48:26.223411 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-08 00:48:26.223417 | orchestrator | Wednesday 08 April 2026 00:46:25 +0000 (0:00:01.226) 0:03:26.262 ******* 2026-04-08 00:48:26.223424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.223431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.223438 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.223445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.223451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.223457 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.223463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.223469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.223475 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.223481 | orchestrator | 2026-04-08 00:48:26.223486 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-08 00:48:26.223493 | orchestrator | Wednesday 08 April 2026 00:46:26 +0000 (0:00:01.326) 0:03:27.588 ******* 2026-04-08 00:48:26.223499 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.223688 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.223695 | 
orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.223699 | orchestrator | 2026-04-08 00:48:26.223703 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-08 00:48:26.223707 | orchestrator | Wednesday 08 April 2026 00:46:27 +0000 (0:00:01.246) 0:03:28.834 ******* 2026-04-08 00:48:26.223711 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.223718 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.223722 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.223726 | orchestrator | 2026-04-08 00:48:26.223798 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-08 00:48:26.223821 | orchestrator | Wednesday 08 April 2026 00:46:29 +0000 (0:00:01.883) 0:03:30.718 ******* 2026-04-08 00:48:26.223827 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.223832 | orchestrator | 2026-04-08 00:48:26.223838 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-08 00:48:26.223844 | orchestrator | Wednesday 08 April 2026 00:46:30 +0000 (0:00:01.085) 0:03:31.803 ******* 2026-04-08 00:48:26.223859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-08 00:48:26.223869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-08 00:48:26.223877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-08 00:48:26.223884 | orchestrator | 2026-04-08 00:48:26.223890 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-08 00:48:26.223897 | orchestrator | Wednesday 08 April 2026 00:46:33 +0000 (0:00:02.872) 0:03:34.676 ******* 2026-04-08 00:48:26.223912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-08 00:48:26.223923 | orchestrator | skipping: [testbed-node-0] 2026-04-08 
00:48:26.223932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-08 00:48:26.223939 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.223946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk 
GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-08 00:48:26.223953 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.223996 | orchestrator | 2026-04-08 00:48:26.224004 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-08 00:48:26.224010 | orchestrator | Wednesday 08 April 2026 00:46:34 +0000 (0:00:01.016) 0:03:35.692 ******* 2026-04-08 00:48:26.224018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-08 00:48:26.224025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-08 00:48:26.224033 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.224040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-08 00:48:26.224056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-08 00:48:26.224065 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.224072 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-08 00:48:26.224079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-08 00:48:26.224087 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.224094 | orchestrator | 2026-04-08 00:48:26.224101 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-08 00:48:26.224108 | orchestrator | Wednesday 08 April 2026 00:46:35 +0000 (0:00:00.759) 0:03:36.452 ******* 2026-04-08 00:48:26.224115 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.224122 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.224216 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.224224 | orchestrator | 2026-04-08 00:48:26.224228 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-08 00:48:26.224232 | orchestrator | Wednesday 08 April 2026 00:46:36 +0000 (0:00:01.218) 0:03:37.670 ******* 2026-04-08 00:48:26.224236 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.224240 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.224244 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.224248 | orchestrator | 2026-04-08 00:48:26.224253 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-08 00:48:26.224259 | orchestrator | Wednesday 08 April 2026 00:46:38 +0000 (0:00:02.023) 0:03:39.693 ******* 2026-04-08 00:48:26.224264 | orchestrator | included: nova for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-08 00:48:26.224270 | orchestrator | 2026-04-08 00:48:26.224276 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-08 00:48:26.224282 | orchestrator | Wednesday 08 April 2026 00:46:40 +0000 (0:00:01.479) 0:03:41.173 ******* 2026-04-08 00:48:26.224294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.224301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.224321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.224332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.224340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.224347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.224361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.224365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.224370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 
'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.224377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.224381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.224388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.224392 | orchestrator | 2026-04-08 00:48:26.224396 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-08 00:48:26.224400 | orchestrator | Wednesday 08 April 2026 00:46:45 +0000 (0:00:05.138) 0:03:46.311 ******* 2026-04-08 00:48:26.224408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.224415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.224419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.224423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.224430 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.224437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.224441 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.224450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.224454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.224458 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.224465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.224474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.224479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.224485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.224489 | 
orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.224493 | orchestrator | 2026-04-08 00:48:26.224497 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-08 00:48:26.224501 | orchestrator | Wednesday 08 April 2026 00:46:45 +0000 (0:00:00.760) 0:03:47.071 ******* 2026-04-08 00:48:26.224505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.224513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.224517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.224521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.224525 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.224529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.224533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.224537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.224544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.224548 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.224551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.224555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.224559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.224563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}})  2026-04-08 00:48:26.224567 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.224571 | orchestrator | 2026-04-08 00:48:26.224575 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-08 00:48:26.224579 | orchestrator | Wednesday 08 April 2026 00:46:47 +0000 (0:00:01.281) 0:03:48.353 ******* 2026-04-08 00:48:26.224582 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.224586 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.224590 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.224594 | orchestrator | 2026-04-08 00:48:26.224598 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-08 00:48:26.224601 | orchestrator | Wednesday 08 April 2026 00:46:48 +0000 (0:00:01.110) 0:03:49.464 ******* 2026-04-08 00:48:26.224608 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.224614 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.224620 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.224626 | orchestrator | 2026-04-08 00:48:26.224634 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-08 00:48:26.224640 | orchestrator | Wednesday 08 April 2026 00:46:50 +0000 (0:00:01.854) 0:03:51.319 ******* 2026-04-08 00:48:26.224645 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.224651 | orchestrator | 2026-04-08 00:48:26.224656 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-08 00:48:26.224661 | orchestrator | Wednesday 08 April 2026 00:46:51 +0000 (0:00:01.257) 0:03:52.576 ******* 2026-04-08 00:48:26.224668 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-08 00:48:26.224674 | orchestrator | 
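The haproxy-config and firewall tasks above iterate over a per-service `haproxy` mapping in which each frontend entry carries `enabled`, `mode`, `external`, `port`, and `listen_port` fields, and `enabled` is sometimes a boolean (`True`/`False`) and sometimes the string `'yes'`/`'no'` (compare `nova_metadata` vs. `nova_metadata_external` in the items logged above). A minimal sketch of how such entries can be filtered, assuming nothing about the actual kolla-ansible role internals (`enabled_frontends` and `truthy` are illustrative names, not kolla code):

```python
# Hypothetical helper (not kolla-ansible source): filters the per-service
# 'haproxy' mapping seen in the task items above down to enabled frontends.

def truthy(value):
    """Normalise kolla-style enabled flags: booleans or 'yes'/'no' strings."""
    if isinstance(value, bool):
        return value
    return str(value).lower() in ("yes", "true", "1")

def enabled_frontends(service_haproxy):
    """Return the sorted names of frontends whose 'enabled' flag is truthy."""
    return sorted(
        name for name, cfg in service_haproxy.items()
        if truthy(cfg.get("enabled", False))
    )

# Frontend entries as they appear in the log for nova-metadata: the internal
# frontend uses the boolean True, the external one the string 'no'.
nova_metadata_haproxy = {
    "nova_metadata": {"enabled": True, "mode": "http", "external": False,
                      "port": "8775", "listen_port": "8775"},
    "nova_metadata_external": {"enabled": "no", "mode": "http",
                               "external": True, "port": "8775",
                               "listen_port": "8775"},
}

print(enabled_frontends(nova_metadata_haproxy))  # ['nova_metadata']
```

This mixed-type `enabled` handling explains why, in the log, `nova_metadata_external` is carried in the item dict but contributes no external listener.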
2026-04-08 00:48:26.224679 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-08 00:48:26.224684 | orchestrator | Wednesday 08 April 2026 00:46:52 +0000 (0:00:01.145) 0:03:53.722 ******* 2026-04-08 00:48:26.224690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-08 00:48:26.224697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-08 00:48:26.224704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-08 00:48:26.224710 | orchestrator | 2026-04-08 00:48:26.224720 | orchestrator | TASK [haproxy-config : Add configuration for 
nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-08 00:48:26.224728 | orchestrator | Wednesday 08 April 2026 00:46:56 +0000 (0:00:03.388) 0:03:57.111 ******* 2026-04-08 00:48:26.224734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-08 00:48:26.224746 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.224751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-08 00:48:26.224763 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.224769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-08 00:48:26.224776 | orchestrator | 
skipping: [testbed-node-2] 2026-04-08 00:48:26.224930 | orchestrator | 2026-04-08 00:48:26.224941 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-08 00:48:26.224947 | orchestrator | Wednesday 08 April 2026 00:46:57 +0000 (0:00:01.098) 0:03:58.209 ******* 2026-04-08 00:48:26.224954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-08 00:48:26.224985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-08 00:48:26.224991 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.224998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-08 00:48:26.225003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-08 00:48:26.225007 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.225011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-08 00:48:26.225015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-08 00:48:26.225019 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.225023 | orchestrator | 2026-04-08 00:48:26.225027 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-08 00:48:26.225102 | orchestrator | Wednesday 08 April 2026 00:46:58 +0000 (0:00:01.397) 0:03:59.607 ******* 2026-04-08 00:48:26.225108 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.225112 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.225116 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.225120 | orchestrator | 2026-04-08 00:48:26.225124 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-08 00:48:26.225127 | orchestrator | Wednesday 08 April 2026 00:47:00 +0000 (0:00:02.190) 0:04:01.797 ******* 2026-04-08 00:48:26.225131 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.225135 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.225139 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.225142 | orchestrator | 2026-04-08 00:48:26.225151 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-08 00:48:26.225155 | orchestrator | Wednesday 08 April 2026 00:47:03 +0000 (0:00:02.702) 0:04:04.500 ******* 2026-04-08 00:48:26.225166 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-08 00:48:26.225170 | orchestrator | 2026-04-08 00:48:26.225174 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-08 00:48:26.225178 | orchestrator | Wednesday 08 April 2026 00:47:04 +0000 (0:00:00.804) 0:04:05.304 ******* 2026-04-08 
00:48:26.225182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-08 00:48:26.225188 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.225192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-08 00:48:26.225196 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.225204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-08 00:48:26.225208 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.225212 | orchestrator | 2026-04-08 00:48:26.225216 | orchestrator | TASK 
[haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-08 00:48:26.225219 | orchestrator | Wednesday 08 April 2026 00:47:05 +0000 (0:00:01.263) 0:04:06.567 ******* 2026-04-08 00:48:26.225223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-08 00:48:26.225227 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.225231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-08 00:48:26.225235 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.225239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-08 00:48:26.225247 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.225251 | orchestrator | 2026-04-08 00:48:26.225258 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-08 00:48:26.225262 | orchestrator | Wednesday 08 April 2026 00:47:06 +0000 (0:00:01.174) 0:04:07.742 ******* 2026-04-08 00:48:26.225266 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.225270 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.225273 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.225277 | orchestrator | 2026-04-08 00:48:26.225281 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-08 00:48:26.225285 | orchestrator | Wednesday 08 April 2026 00:47:08 +0000 (0:00:01.388) 0:04:09.130 ******* 2026-04-08 00:48:26.225289 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:26.225293 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:26.225296 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:26.225300 | orchestrator | 2026-04-08 00:48:26.225304 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-08 00:48:26.225308 | orchestrator | Wednesday 08 April 2026 00:47:10 +0000 (0:00:02.208) 0:04:11.339 ******* 2026-04-08 00:48:26.225312 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:26.225315 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:26.225334 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:26.225338 | orchestrator | 2026-04-08 00:48:26.225342 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-08 00:48:26.225345 | orchestrator | Wednesday 08 April 2026 00:47:12 +0000 (0:00:02.512) 0:04:13.851 ******* 2026-04-08 00:48:26.225349 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-08 00:48:26.225354 | orchestrator | 2026-04-08 00:48:26.225357 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-08 00:48:26.225361 | orchestrator | Wednesday 08 April 2026 00:47:13 +0000 (0:00:00.991) 0:04:14.843 ******* 2026-04-08 00:48:26.225365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-08 00:48:26.225372 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.225376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-08 00:48:26.225380 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.225384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-08 00:48:26.225391 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.225395 | orchestrator | 2026-04-08 00:48:26.225399 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-08 00:48:26.225403 | orchestrator | Wednesday 08 April 2026 00:47:14 +0000 (0:00:00.993) 0:04:15.837 ******* 2026-04-08 00:48:26.225407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-08 00:48:26.225411 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.225445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-08 00:48:26.225451 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.225455 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-08 00:48:26.225459 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.225463 | orchestrator | 2026-04-08 00:48:26.225466 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-08 00:48:26.225470 | orchestrator | Wednesday 08 April 2026 00:47:15 +0000 (0:00:01.086) 0:04:16.923 ******* 2026-04-08 00:48:26.225474 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.225478 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.225481 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.225485 | orchestrator | 2026-04-08 00:48:26.225489 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-08 00:48:26.225493 | orchestrator | Wednesday 08 April 2026 00:47:17 +0000 (0:00:01.595) 0:04:18.519 ******* 2026-04-08 00:48:26.225496 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:26.225500 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:26.225504 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:26.225507 | orchestrator | 2026-04-08 00:48:26.225511 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-08 00:48:26.225515 | orchestrator | Wednesday 08 April 2026 00:47:19 +0000 (0:00:02.293) 0:04:20.812 ******* 2026-04-08 00:48:26.225519 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:26.225523 | orchestrator | ok: [testbed-node-1] 2026-04-08 
00:48:26.225526 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:26.225530 | orchestrator | 2026-04-08 00:48:26.225534 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-08 00:48:26.225538 | orchestrator | Wednesday 08 April 2026 00:47:23 +0000 (0:00:03.421) 0:04:24.234 ******* 2026-04-08 00:48:26.225545 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.225579 | orchestrator | 2026-04-08 00:48:26.225586 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-08 00:48:26.225592 | orchestrator | Wednesday 08 April 2026 00:47:24 +0000 (0:00:01.409) 0:04:25.643 ******* 2026-04-08 00:48:26.225749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-08 00:48:26.225761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-08 00:48:26.225770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-08 00:48:26.225775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-08 00:48:26.225780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.225792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-08 00:48:26.225803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-08 00:48:26.225808 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-08 00:48:26.225815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-08 00:48:26.225820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-08 00:48:26.225825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.225836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-08 00:48:26.225841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-08 00:48:26.225846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-08 00:48:26.225850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.225855 | orchestrator | 2026-04-08 00:48:26.225861 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-08 00:48:26.225866 | orchestrator | Wednesday 08 April 2026 00:47:28 +0000 (0:00:04.051) 0:04:29.695 ******* 2026-04-08 00:48:26.225871 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-08 00:48:26.225875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-08 00:48:26.225886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-08 00:48:26.225891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-08 00:48:26.225896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-08 00:48:26.225903 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.225908 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.225912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-08 00:48:26.225920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-08 00:48:26.225928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-08 00:48:26.225933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-08 00:48:26.225937 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.225941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 
'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-08 00:48:26.225948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-08 00:48:26.225952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-08 00:48:26.225990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-08 00:48:26.225998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-08 00:48:26.226002 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:26.226006 | orchestrator |
2026-04-08 00:48:26.226098 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-04-08 00:48:26.226109 | orchestrator | Wednesday 08 April 2026 00:47:29 +0000 (0:00:00.868) 0:04:30.564 *******
2026-04-08 00:48:26.226116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-08 00:48:26.226122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-08 00:48:26.226129 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:26.226136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-08 00:48:26.226142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-08 00:48:26.226148 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:26.226155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-08 00:48:26.226159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-08 00:48:26.226163 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:26.226167 | orchestrator |
2026-04-08 00:48:26.226212 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-04-08 00:48:26.226221 | orchestrator | Wednesday 08 April 2026 00:47:30 +0000 (0:00:00.962) 0:04:31.527 *******
2026-04-08 00:48:26.226225 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:48:26.226229 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:48:26.226233 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:48:26.226244 | orchestrator |
2026-04-08 00:48:26.226251 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-04-08 00:48:26.226257 | orchestrator | Wednesday 08 April 2026 00:47:31 +0000 (0:00:01.223) 0:04:32.750 *******
2026-04-08 00:48:26.226264 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:48:26.226270 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:48:26.226276 |
orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.226282 | orchestrator | 2026-04-08 00:48:26.226288 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-08 00:48:26.226295 | orchestrator | Wednesday 08 April 2026 00:47:33 +0000 (0:00:01.893) 0:04:34.644 ******* 2026-04-08 00:48:26.226302 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.226310 | orchestrator | 2026-04-08 00:48:26.226316 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-08 00:48:26.226323 | orchestrator | Wednesday 08 April 2026 00:47:35 +0000 (0:00:01.554) 0:04:36.198 ******* 2026-04-08 00:48:26.226330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.226344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.226351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:26.226362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-08 00:48:26.226375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-08 00:48:26.226385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-08 00:48:26.226393 | orchestrator |
2026-04-08 00:48:26.226399 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-04-08 00:48:26.226406 | orchestrator | Wednesday 08 April 2026 00:47:39 +0000 (0:00:04.487) 0:04:40.686 *******
2026-04-08 00:48:26.226413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-08 00:48:26.226428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-08 00:48:26.226435 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:26.226448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-08 00:48:26.226456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-08 00:48:26.226463 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:26.226473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-08 00:48:26.226485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-08 00:48:26.226492 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:26.226498 | orchestrator |
2026-04-08 00:48:26.226505 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-04-08 00:48:26.226512 | orchestrator | Wednesday 08 April 2026 00:47:40 +0000 (0:00:00.884) 0:04:41.570 *******
2026-04-08 00:48:26.226519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})
2026-04-08 00:48:26.226529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-08 00:48:26.226538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-08 00:48:26.226546 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:26.226553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})
2026-04-08 00:48:26.226560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-08 00:48:26.226567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-08 00:48:26.226578 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:26.226584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})
2026-04-08 00:48:26.226590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-08 00:48:26.226599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-08 00:48:26.226606 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:26.226613 | orchestrator |
2026-04-08 00:48:26.226620 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-04-08 00:48:26.226627 | orchestrator | Wednesday 08 April 2026 00:47:41 +0000 (0:00:01.022) 0:04:42.593 *******
2026-04-08 00:48:26.226633 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:26.226904 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:26.226913 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:26.226919 | orchestrator |
2026-04-08 00:48:26.226926 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-04-08 00:48:26.226932 | orchestrator | Wednesday 08 April 2026 00:47:41 +0000 (0:00:00.381) 0:04:42.975 *******
2026-04-08 00:48:26.226938 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:48:26.226944 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:48:26.226950 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:48:26.226955 | orchestrator |
2026-04-08 00:48:26.226982 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-04-08 00:48:26.226987 | orchestrator | Wednesday 08 April 2026 00:47:43 +0000 (0:00:01.175) 0:04:44.150 *******
2026-04-08 00:48:26.226993 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:48:26.226999 | orchestrator |
2026-04-08 00:48:26.227006 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-04-08 00:48:26.227011 | orchestrator | Wednesday 08 April 2026 00:47:44 +0000 (0:00:01.532) 0:04:45.683 *******
2026-04-08 00:48:26.227024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-08 00:48:26.227031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:48:26.227046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:48:26.227052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:48:26.227066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-08 00:48:26.227071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:48:26.227078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:48:26.227082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:48:26.227090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:48:26.227094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:48:26.227101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-08 00:48:26.227105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:48:26.227109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:48:26.227116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:48:26.227124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:48:26.227129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-08 00:48:26.227137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-08 00:48:26.227142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-08 00:48:26.227152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:48:26.227212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-08 00:48:26.227221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:48:26.227232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:48:26.227238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-08 00:48:26.227266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-08 00:48:26.227279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:48:26.227288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-08 00:48:26.227292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-08 00:48:26.227299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:48:26.227303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:48:26.227307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-08 00:48:26.227311 | orchestrator |
2026-04-08 00:48:26.227314 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-04-08 00:48:26.227318 | orchestrator | Wednesday 08 April 2026 00:47:48 +0000 (0:00:03.612) 0:04:49.296 *******
2026-04-08 00:48:26.227325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-08 00:48:26.227335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:48:26.227339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:48:26.227343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:48:26.227480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:48:26.227491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-08 00:48:26.227512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 
'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-08 00:48:26.227521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-08 00:48:26.227528 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 00:48:26.227538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:48:26.227543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:48:26.227547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:48:26.227555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:48:26.227559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-08 00:48:26.227563 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.227567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 00:48:26.227600 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.227605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-08 00:48:26.227616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-08 00:48:26.227620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 00:48:26.227624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:48:26.227630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:48:26.227640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:48:26.227646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-08 00:48:26.227657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-08 00:48:26.227663 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.227673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 00:48:26.227679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:26.227686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-08 00:48:26.227697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:48:26.227704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:48:26.227714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-08 00:48:26.227720 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.227727 | orchestrator | 2026-04-08 00:48:26.227734 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-08 00:48:26.227740 | orchestrator | Wednesday 08 April 2026 00:47:48 +0000 (0:00:00.718) 0:04:50.014 ******* 2026-04-08 00:48:26.227751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-08 00:48:26.227759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-08 00:48:26.227767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-08 00:48:26.227856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.227863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-08 00:48:26.227870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.227877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.227887 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.227894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.227906 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.227912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-08 00:48:26.227919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-08 00:48:26.227925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.227936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-08 00:48:26.227942 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.227949 | orchestrator | 2026-04-08 00:48:26.227955 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-08 00:48:26.227979 | orchestrator | Wednesday 08 April 2026 00:47:49 +0000 (0:00:01.093) 0:04:51.107 ******* 2026-04-08 00:48:26.227986 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.227992 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.227999 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.228005 | orchestrator | 2026-04-08 00:48:26.228011 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-08 00:48:26.228018 | orchestrator | Wednesday 08 April 2026 00:47:50 +0000 (0:00:00.403) 0:04:51.511 ******* 2026-04-08 00:48:26.228024 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.228030 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.228037 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.228043 | orchestrator | 2026-04-08 00:48:26.228050 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-08 00:48:26.228056 | orchestrator | Wednesday 08 April 2026 00:47:51 +0000 (0:00:01.137) 0:04:52.649 ******* 2026-04-08 00:48:26.228063 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.228069 | orchestrator | 2026-04-08 00:48:26.228076 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-08 00:48:26.228082 | 
orchestrator | Wednesday 08 April 2026 00:47:52 +0000 (0:00:01.291) 0:04:53.940 ******* 2026-04-08 00:48:26.228089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:48:26.228106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:48:26.228113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-08 00:48:26.228120 | orchestrator | 2026-04-08 00:48:26.228130 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-08 00:48:26.228137 | orchestrator | Wednesday 08 April 2026 00:47:55 +0000 (0:00:02.607) 0:04:56.548 ******* 2026-04-08 00:48:26.228143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-08 00:48:26.228150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-08 00:48:26.228165 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.228172 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.228181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-08 00:48:26.228188 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.228193 | orchestrator | 2026-04-08 00:48:26.228199 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-08 00:48:26.228205 | orchestrator | Wednesday 08 April 2026 00:47:55 +0000 (0:00:00.422) 0:04:56.970 ******* 2026-04-08 00:48:26.228211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-08 00:48:26.228217 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.228224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-08 00:48:26.228230 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.228237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-08 00:48:26.228244 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.228250 | orchestrator | 2026-04-08 00:48:26.228256 | orchestrator | TASK [proxysql-config : Copying over rabbitmq 
ProxySQL users config] *********** 2026-04-08 00:48:26.228266 | orchestrator | Wednesday 08 April 2026 00:47:56 +0000 (0:00:00.623) 0:04:57.594 ******* 2026-04-08 00:48:26.228273 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.228279 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.228286 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.228292 | orchestrator | 2026-04-08 00:48:26.228298 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-08 00:48:26.228305 | orchestrator | Wednesday 08 April 2026 00:47:56 +0000 (0:00:00.486) 0:04:58.080 ******* 2026-04-08 00:48:26.228311 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.228316 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.228322 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.228328 | orchestrator | 2026-04-08 00:48:26.228334 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-08 00:48:26.228340 | orchestrator | Wednesday 08 April 2026 00:47:58 +0000 (0:00:01.404) 0:04:59.484 ******* 2026-04-08 00:48:26.228351 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.228357 | orchestrator | 2026-04-08 00:48:26.228365 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-08 00:48:26.228371 | orchestrator | Wednesday 08 April 2026 00:48:00 +0000 (0:00:01.782) 0:05:01.267 ******* 2026-04-08 00:48:26.228377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-08 00:48:26.228391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-08 00:48:26.228399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-08 00:48:26.228410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-08 00:48:26.228422 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-08 00:48:26.228434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-08 00:48:26.228441 | orchestrator | 2026-04-08 00:48:26.228448 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-08 00:48:26.228456 | orchestrator | Wednesday 08 April 2026 00:48:06 +0000 (0:00:06.213) 0:05:07.481 ******* 2026-04-08 00:48:26.228466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-08 00:48:26.228473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-08 00:48:26.228486 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.228493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-08 00:48:26.228504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-08 00:48:26.228511 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.228521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-08 00:48:26.228532 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-08 00:48:26.228539 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.228545 | orchestrator | 2026-04-08 00:48:26.228550 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-08 00:48:26.228556 | orchestrator | Wednesday 08 April 2026 00:48:07 +0000 (0:00:01.107) 0:05:08.589 ******* 2026-04-08 00:48:26.228563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-08 00:48:26.228570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  
2026-04-08 00:48:26.228580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-08 00:48:26.228587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-08 00:48:26.228593 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.228599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-08 00:48:26.228605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-08 00:48:26.228612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-08 00:48:26.228618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-08 00:48:26.228629 | orchestrator | skipping: [testbed-node-1] 
2026-04-08 00:48:26.228637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-08 00:48:26.228647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-08 00:48:26.228653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-08 00:48:26.228659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-08 00:48:26.228665 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.228670 | orchestrator | 2026-04-08 00:48:26.228676 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-08 00:48:26.228682 | orchestrator | Wednesday 08 April 2026 00:48:08 +0000 (0:00:01.150) 0:05:09.740 ******* 2026-04-08 00:48:26.228689 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.228695 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.228701 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.228707 | orchestrator | 2026-04-08 00:48:26.228714 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-08 00:48:26.228720 
| orchestrator | Wednesday 08 April 2026 00:48:09 +0000 (0:00:01.086) 0:05:10.826 ******* 2026-04-08 00:48:26.228726 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:26.228731 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:26.228738 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:26.228744 | orchestrator | 2026-04-08 00:48:26.228750 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-08 00:48:26.228756 | orchestrator | Wednesday 08 April 2026 00:48:11 +0000 (0:00:01.850) 0:05:12.677 ******* 2026-04-08 00:48:26.228762 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.228766 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.228770 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.228773 | orchestrator | 2026-04-08 00:48:26.228777 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-08 00:48:26.228781 | orchestrator | Wednesday 08 April 2026 00:48:11 +0000 (0:00:00.264) 0:05:12.942 ******* 2026-04-08 00:48:26.228785 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.228788 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.228792 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.228796 | orchestrator | 2026-04-08 00:48:26.228799 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-08 00:48:26.228803 | orchestrator | Wednesday 08 April 2026 00:48:12 +0000 (0:00:00.473) 0:05:13.415 ******* 2026-04-08 00:48:26.228807 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.228811 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.228814 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.228818 | orchestrator | 2026-04-08 00:48:26.228825 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-08 00:48:26.228828 | 
orchestrator | Wednesday 08 April 2026 00:48:12 +0000 (0:00:00.328) 0:05:13.743 ******* 2026-04-08 00:48:26.228832 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.228836 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.228840 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.228848 | orchestrator | 2026-04-08 00:48:26.228852 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-08 00:48:26.228855 | orchestrator | Wednesday 08 April 2026 00:48:12 +0000 (0:00:00.257) 0:05:14.001 ******* 2026-04-08 00:48:26.228859 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.228863 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.228867 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.228870 | orchestrator | 2026-04-08 00:48:26.228874 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-04-08 00:48:26.228878 | orchestrator | Wednesday 08 April 2026 00:48:13 +0000 (0:00:00.266) 0:05:14.267 ******* 2026-04-08 00:48:26.228882 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:26.228885 | orchestrator | 2026-04-08 00:48:26.228889 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-04-08 00:48:26.228893 | orchestrator | Wednesday 08 April 2026 00:48:14 +0000 (0:00:01.655) 0:05:15.923 ******* 2026-04-08 00:48:26.228897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-08 00:48:26.228905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-08 00:48:26.228909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-08 00:48:26.228913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-08 00:48:26.228919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-08 00:48:26.228927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-08 00:48:26.228931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-08 00:48:26.228935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-08 00:48:26.228942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-08 00:48:26.228946 | orchestrator | 2026-04-08 00:48:26.228950 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-08 00:48:26.228954 | orchestrator | Wednesday 08 April 2026 00:48:16 +0000 (0:00:02.070) 0:05:17.994 ******* 2026-04-08 00:48:26.228977 | orchestrator | changed: [testbed-node-0] => { 2026-04-08 00:48:26.228981 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:48:26.228985 | orchestrator | } 2026-04-08 00:48:26.228989 | orchestrator | changed: [testbed-node-1] => { 2026-04-08 00:48:26.228993 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:48:26.228997 | orchestrator | } 2026-04-08 00:48:26.229001 | 
orchestrator | changed: [testbed-node-2] => { 2026-04-08 00:48:26.229004 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:48:26.229008 | orchestrator | } 2026-04-08 00:48:26.229012 | orchestrator | 2026-04-08 00:48:26.229016 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-08 00:48:26.229020 | orchestrator | Wednesday 08 April 2026 00:48:17 +0000 (0:00:00.295) 0:05:18.289 ******* 2026-04-08 00:48:26.229024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-08 00:48:26.229036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:48:26.229041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:48:26.229045 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:26.229049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-08 00:48:26.229056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:48:26.229060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:48:26.229064 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:26.229068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-08 00:48:26.229075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-08 00:48:26.229082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-08 00:48:26.229086 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:26.229090 | orchestrator | 2026-04-08 00:48:26.229094 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-08 00:48:26.229098 | orchestrator | Wednesday 08 April 2026 00:48:18 +0000 (0:00:01.400) 0:05:19.689 ******* 2026-04-08 00:48:26.229102 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:26.229106 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:26.229109 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:26.229113 | orchestrator | 2026-04-08 00:48:26.229117 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-08 00:48:26.229121 | orchestrator | Wednesday 08 April 2026 00:48:19 +0000 (0:00:00.790) 0:05:20.480 ******* 2026-04-08 00:48:26.229125 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:26.229129 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:26.229132 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:26.229136 | orchestrator | 2026-04-08 00:48:26.229140 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-08 00:48:26.229144 | orchestrator | Wednesday 08 April 2026 00:48:19 +0000 (0:00:00.307) 0:05:20.788 ******* 2026-04-08 00:48:26.229149 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:26.229156 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:26.229162 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:26.229169 | orchestrator | 2026-04-08 00:48:26.229175 
| orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-08 00:48:26.229181 | orchestrator | Wednesday 08 April 2026 00:48:20 +0000 (0:00:00.785) 0:05:21.574 ******* 2026-04-08 00:48:26.229188 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:26.229194 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:26.229200 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:26.229206 | orchestrator | 2026-04-08 00:48:26.229213 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-08 00:48:26.229219 | orchestrator | Wednesday 08 April 2026 00:48:21 +0000 (0:00:00.746) 0:05:22.320 ******* 2026-04-08 00:48:26.229226 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:26.229233 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:26.229240 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:26.229245 | orchestrator | 2026-04-08 00:48:26.229252 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-08 00:48:26.229258 | orchestrator | Wednesday 08 April 2026 00:48:22 +0000 (0:00:00.972) 0:05:23.292 ******* 2026-04-08 00:48:26.229272 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_1dy0xrqp/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_1dy0xrqp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_1dy0xrqp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_1dy0xrqp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:48:26.229285 | orchestrator | 2026-04-08 00:48:26 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:48:26.229292 | orchestrator | 2026-04-08 00:48:26 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:48:26.229299 | orchestrator | 2026-04-08 00:48:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:48:26.229309 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_6ezkgu5z/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_6ezkgu5z/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_6ezkgu5z/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in 
start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_6ezkgu5z/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:48:26.229324 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_mv19_utp/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_mv19_utp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_mv19_utp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_mv19_utp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:48:26.229332 | orchestrator | 2026-04-08 00:48:26.229338 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:48:26.229344 | orchestrator | testbed-node-0 : ok=120  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0 2026-04-08 00:48:26.229351 | orchestrator | testbed-node-1 : ok=119  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0 2026-04-08 00:48:26.229364 | orchestrator | testbed-node-2 : ok=119  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0 2026-04-08 00:48:26.229370 | orchestrator | 2026-04-08 00:48:26.229376 | orchestrator | 2026-04-08 00:48:26.229382 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:48:26.229388 | orchestrator | Wednesday 08 April 2026 00:48:24 +0000 (0:00:02.309) 0:05:25.602 ******* 2026-04-08 00:48:26.229394 | orchestrator | =============================================================================== 2026-04-08 00:48:26.229400 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.21s 2026-04-08 00:48:26.229406 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.73s 2026-04-08 00:48:26.229412 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 5.28s 2026-04-08 00:48:26.229419 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.19s 2026-04-08 00:48:26.229425 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.14s 2026-04-08 00:48:26.229431 | orchestrator | loadbalancer : Copying over 
proxysql config ----------------------------- 4.71s 2026-04-08 00:48:26.229437 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.49s 2026-04-08 00:48:26.229443 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.09s 2026-04-08 00:48:26.229449 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 4.05s 2026-04-08 00:48:26.229455 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 3.99s 2026-04-08 00:48:26.229461 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.96s 2026-04-08 00:48:26.229468 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 3.92s 2026-04-08 00:48:26.229474 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.84s 2026-04-08 00:48:26.229480 | orchestrator | haproxy-config : Add configuration for mariadb when using single external frontend --- 3.80s 2026-04-08 00:48:26.229486 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.78s 2026-04-08 00:48:26.229493 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.77s 2026-04-08 00:48:26.229499 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 3.61s 2026-04-08 00:48:26.229505 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.57s 2026-04-08 00:48:26.229511 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.57s 2026-04-08 00:48:26.229518 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.43s 2026-04-08 00:48:29.263414 | orchestrator | 2026-04-08 00:48:29 | INFO  | Task f157936e-1dae-4e2b-b755-a86927bef45f is in state STARTED 2026-04-08 00:48:29.266730 | orchestrator | 
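All three node failures in the play above report the same docker error: `400 Client Error ... Bad Request ("invalid reference format")` while pulling `registry.osism.tech/kolla/release//haproxy:2.8.16.20260328`. Note the doubled slash in the repository path (`kolla/release//haproxy`): an empty path component, which the Docker image-reference grammar rejects. A minimal illustrative check, assuming a simplified subset of that grammar (this is not kolla-ansible's actual validation code):

```python
# Sketch: detect the empty path component that makes the docker daemon
# return "invalid reference format" for the image seen in the log above.

def image_repo_is_valid(image: str) -> bool:
    """Return True if every '/'-separated component of the repository
    part (the reference without its tag) is non-empty."""
    name, sep, tag = image.rpartition(":")
    # A ":" only delimits a tag when it appears after the last "/";
    # otherwise it is a registry port (e.g. localhost:5000/nginx).
    repo = name if sep and "/" not in tag else image
    return all(repo.split("/"))

print(image_repo_is_valid("registry.osism.tech/kolla/release/haproxy:2.8.16.20260328"))   # True
print(image_repo_is_valid("registry.osism.tech/kolla/release//haproxy:2.8.16.20260328"))  # False
```

The extra slash appears wherever the registry prefix is joined to the image name in this log (haproxy, keepalived, proxysql, opensearch all show `release//`), which suggests a configured registry or namespace value with a trailing slash being concatenated with a literal `/`; that is an inference from the log, not confirmed by it.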
2026-04-08 00:48:29 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:48:29.269773 | orchestrator | 2026-04-08 00:48:29 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:48:29.269833 | orchestrator | 2026-04-08 00:48:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:48:32.315014 | orchestrator | 2026-04-08 00:48:32 | INFO  | Task f157936e-1dae-4e2b-b755-a86927bef45f is in state STARTED 2026-04-08 00:48:32.317155 | orchestrator | 2026-04-08 00:48:32 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:48:32.318954 | orchestrator | 2026-04-08 00:48:32 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:48:32.319622 | orchestrator | 2026-04-08 00:48:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:48:35.351050 | orchestrator | 2026-04-08 00:48:35 | INFO  | Task f157936e-1dae-4e2b-b755-a86927bef45f is in state STARTED 2026-04-08 00:48:35.352414 | orchestrator | 2026-04-08 00:48:35 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:48:35.352459 | orchestrator | 2026-04-08 00:48:35 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:48:35.352469 | orchestrator | 2026-04-08 00:48:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:48:38.381508 | orchestrator | 2026-04-08 00:48:38 | INFO  | Task f157936e-1dae-4e2b-b755-a86927bef45f is in state STARTED 2026-04-08 00:48:38.381749 | orchestrator | 2026-04-08 00:48:38 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:48:38.382608 | orchestrator | 2026-04-08 00:48:38 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:48:38.382642 | orchestrator | 2026-04-08 00:48:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:48:41.418352 | orchestrator | 2026-04-08 00:48:41 | INFO  | Task 
f157936e-1dae-4e2b-b755-a86927bef45f is in state STARTED 2026-04-08 00:48:41.419083 | orchestrator | 2026-04-08 00:48:41 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:48:41.421211 | orchestrator | 2026-04-08 00:48:41 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:48:41.421284 | orchestrator | 2026-04-08 00:48:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:48:44.461385 | orchestrator | 2026-04-08 00:48:44 | INFO  | Task f157936e-1dae-4e2b-b755-a86927bef45f is in state STARTED 2026-04-08 00:48:44.461661 | orchestrator | 2026-04-08 00:48:44 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:48:44.462627 | orchestrator | 2026-04-08 00:48:44 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:48:44.462654 | orchestrator | 2026-04-08 00:48:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:48:47.491815 | orchestrator | 2026-04-08 00:48:47 | INFO  | Task f157936e-1dae-4e2b-b755-a86927bef45f is in state STARTED 2026-04-08 00:48:47.492080 | orchestrator | 2026-04-08 00:48:47 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:48:47.494844 | orchestrator | 2026-04-08 00:48:47 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:48:47.494906 | orchestrator | 2026-04-08 00:48:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:48:50.520285 | orchestrator | 2026-04-08 00:48:50.520367 | orchestrator | 2026-04-08 00:48:50 | INFO  | Task f157936e-1dae-4e2b-b755-a86927bef45f is in state SUCCESS 2026-04-08 00:48:50.521715 | orchestrator | 2026-04-08 00:48:50.521758 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 00:48:50.521764 | orchestrator | 2026-04-08 00:48:50.521768 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
2026-04-08 00:48:50.521773 | orchestrator | Wednesday 08 April 2026 00:48:28 +0000 (0:00:00.336) 0:00:00.336 ******* 2026-04-08 00:48:50.521777 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:48:50.521782 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:48:50.521786 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:48:50.521790 | orchestrator | 2026-04-08 00:48:50.521794 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 00:48:50.521798 | orchestrator | Wednesday 08 April 2026 00:48:28 +0000 (0:00:00.291) 0:00:00.627 ******* 2026-04-08 00:48:50.521803 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-04-08 00:48:50.521807 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-08 00:48:50.521811 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-08 00:48:50.521815 | orchestrator | 2026-04-08 00:48:50.521819 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-08 00:48:50.521840 | orchestrator | 2026-04-08 00:48:50.521844 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-08 00:48:50.521848 | orchestrator | Wednesday 08 April 2026 00:48:28 +0000 (0:00:00.290) 0:00:00.917 ******* 2026-04-08 00:48:50.521852 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:50.521856 | orchestrator | 2026-04-08 00:48:50.521860 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-04-08 00:48:50.521864 | orchestrator | Wednesday 08 April 2026 00:48:29 +0000 (0:00:00.636) 0:00:01.554 ******* 2026-04-08 00:48:50.521868 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-08 00:48:50.521872 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'vm.max_map_count', 'value': 262144}) 2026-04-08 00:48:50.521876 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-08 00:48:50.521880 | orchestrator | 2026-04-08 00:48:50.521884 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-08 00:48:50.521888 | orchestrator | Wednesday 08 April 2026 00:48:30 +0000 (0:00:01.004) 0:00:02.559 ******* 2026-04-08 00:48:50.521895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:50.521917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:50.521936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:50.521946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-08 00:48:50.521960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-08 00:48:50.521972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-08 00:48:50.522001 | orchestrator | 2026-04-08 00:48:50.522008 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-08 00:48:50.522044 | orchestrator | Wednesday 08 April 2026 00:48:31 +0000 (0:00:01.269) 0:00:03.828 ******* 2026-04-08 00:48:50.522050 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:48:50.522054 | orchestrator | 2026-04-08 00:48:50.522063 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-08 00:48:50.522071 | orchestrator | Wednesday 08 April 2026 00:48:32 +0000 (0:00:00.461) 0:00:04.290 ******* 2026-04-08 00:48:50.522075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:50.522080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:50.522084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:50.522091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-08 00:48:50.522107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-08 00:48:50.522114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-08 00:48:50.522121 | orchestrator | 2026-04-08 00:48:50.522128 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-08 00:48:50.522135 | orchestrator | Wednesday 08 April 2026 00:48:34 +0000 (0:00:02.368) 0:00:06.658 ******* 2026-04-08 00:48:50.522145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:50.522158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-08 00:48:50.522166 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:50.522170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:50.522175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-08 00:48:50.522179 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:50.522185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:50.522194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-08 00:48:50.522201 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:50.522205 | orchestrator | 2026-04-08 00:48:50.522209 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-08 00:48:50.522213 | orchestrator | Wednesday 08 April 2026 00:48:35 +0000 (0:00:01.063) 0:00:07.722 ******* 2026-04-08 00:48:50.522217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:50.522223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-08 00:48:50.522229 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:50.522238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:50.522254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-08 00:48:50.522260 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:50.522267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:50.522388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-08 00:48:50.522401 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:50.522560 | orchestrator | 2026-04-08 00:48:50.522570 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-08 00:48:50.522584 | orchestrator | Wednesday 08 April 2026 00:48:36 +0000 (0:00:00.864) 0:00:08.586 ******* 2026-04-08 00:48:50.522595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:50.522602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:50.522609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:50.522620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-08 00:48:50.522637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-08 00:48:50.522644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-08 00:48:50.522648 | orchestrator | 2026-04-08 00:48:50.522652 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-08 00:48:50.522656 | orchestrator | Wednesday 08 April 2026 00:48:38 +0000 (0:00:02.248) 0:00:10.835 ******* 2026-04-08 00:48:50.522659 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:50.522663 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:50.522667 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:50.522671 | orchestrator | 2026-04-08 00:48:50.522675 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-08 00:48:50.522678 | orchestrator | Wednesday 08 April 2026 00:48:41 +0000 (0:00:02.582) 0:00:13.418 ******* 2026-04-08 00:48:50.522682 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:48:50.522686 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:48:50.522692 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:48:50.522698 | orchestrator | 2026-04-08 00:48:50.522704 | 
orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-04-08 00:48:50.522710 | orchestrator | Wednesday 08 April 2026 00:48:42 +0000 (0:00:01.382) 0:00:14.800 ******* 2026-04-08 00:48:50.522717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:50.522733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:50.522743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:48:50.522751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-08 00:48:50.522755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-08 00:48:50.522769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-08 00:48:50.522773 | orchestrator | 2026-04-08 00:48:50.522777 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-04-08 00:48:50.522781 | orchestrator | Wednesday 08 April 2026 00:48:44 +0000 (0:00:01.941) 0:00:16.742 ******* 2026-04-08 00:48:50.522785 | orchestrator | changed: [testbed-node-0] => { 2026-04-08 00:48:50.522790 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:48:50.522796 | orchestrator | } 2026-04-08 00:48:50.522802 | orchestrator | changed: [testbed-node-1] => { 2026-04-08 00:48:50.522806 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:48:50.522812 | orchestrator | } 2026-04-08 00:48:50.522818 | orchestrator | changed: [testbed-node-2] => { 2026-04-08 00:48:50.522835 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:48:50.522848 | orchestrator | } 2026-04-08 00:48:50.522853 | orchestrator | 2026-04-08 00:48:50.522857 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-08 00:48:50.522860 | orchestrator | Wednesday 08 April 2026 00:48:45 +0000 (0:00:00.444) 0:00:17.186 ******* 2026-04-08 00:48:50.522864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:50.522870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  
2026-04-08 00:48:50.522878 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:50.522885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:50.522894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-08 00:48:50.522898 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:50.522902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:48:50.522910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-08 00:48:50.522914 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:50.522918 | orchestrator | 2026-04-08 00:48:50.522925 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-08 00:48:50.522929 | orchestrator | Wednesday 08 April 2026 00:48:45 +0000 (0:00:00.726) 0:00:17.913 ******* 2026-04-08 00:48:50.522933 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:50.522936 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:48:50.522940 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:48:50.522944 | orchestrator | 2026-04-08 00:48:50.522948 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-08 00:48:50.522951 | orchestrator | Wednesday 08 April 2026 00:48:46 +0000 (0:00:00.251) 0:00:18.164 ******* 2026-04-08 00:48:50.522957 | orchestrator | 2026-04-08 00:48:50.522963 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-08 00:48:50.522970 | orchestrator | Wednesday 08 April 2026 00:48:46 +0000 (0:00:00.062) 0:00:18.226 ******* 2026-04-08 00:48:50.523027 | orchestrator | 2026-04-08 00:48:50.523036 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-08 00:48:50.523042 | orchestrator | Wednesday 08 April 2026 00:48:46 +0000 (0:00:00.081) 0:00:18.308 ******* 2026-04-08 00:48:50.523048 | orchestrator | 2026-04-08 00:48:50.523054 | orchestrator | RUNNING HANDLER [opensearch : Disable 
shard allocation] ************************ 2026-04-08 00:48:50.523061 | orchestrator | Wednesday 08 April 2026 00:48:46 +0000 (0:00:00.060) 0:00:18.369 ******* 2026-04-08 00:48:50.523067 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:50.523074 | orchestrator | 2026-04-08 00:48:50.523078 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-08 00:48:50.523085 | orchestrator | Wednesday 08 April 2026 00:48:46 +0000 (0:00:00.490) 0:00:18.859 ******* 2026-04-08 00:48:50.523089 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:48:50.523093 | orchestrator | 2026-04-08 00:48:50.523097 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-08 00:48:50.523101 | orchestrator | Wednesday 08 April 2026 00:48:47 +0000 (0:00:00.194) 0:00:19.054 ******* 2026-04-08 00:48:50.523105 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_xo0si6j_/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_xo0si6j_/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", 
line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_xo0si6j_/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_xo0si6j_/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:48:50.523121 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_t1pbs39r/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_t1pbs39r/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_t1pbs39r/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_t1pbs39r/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:48:50.523133 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_r_w25ckj/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_r_w25ckj/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_r_w25ckj/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_r_w25ckj/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:48:50.523137 | orchestrator | 2026-04-08 00:48:50.523141 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:48:50.523146 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-08 00:48:50.523152 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-08 00:48:50.523156 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-08 00:48:50.523160 | orchestrator | 2026-04-08 00:48:50.523164 | orchestrator | 2026-04-08 00:48:50.523170 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:48:50.523177 | orchestrator | Wednesday 08 April 2026 00:48:49 +0000 (0:00:02.811) 0:00:21.866 ******* 2026-04-08 00:48:50.523181 | orchestrator | =============================================================================== 2026-04-08 00:48:50.523185 | orchestrator | opensearch : Restart opensearch container ------------------------------- 2.81s 2026-04-08 00:48:50.523189 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.58s 2026-04-08 00:48:50.523192 | orchestrator | service-cert-copy : opensearch | Copying over extra 
CA certificates ----- 2.37s 2026-04-08 00:48:50.523196 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.25s 2026-04-08 00:48:50.523205 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 1.94s 2026-04-08 00:48:50.523209 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.38s 2026-04-08 00:48:50.523213 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.27s 2026-04-08 00:48:50.523216 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.06s 2026-04-08 00:48:50.523220 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.00s 2026-04-08 00:48:50.523224 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.86s 2026-04-08 00:48:50.523227 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.73s 2026-04-08 00:48:50.523231 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.64s 2026-04-08 00:48:50.523235 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.49s 2026-04-08 00:48:50.523240 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.46s 2026-04-08 00:48:50.523246 | orchestrator | service-check-containers : opensearch | Notify handlers to restart containers --- 0.44s 2026-04-08 00:48:50.523252 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2026-04-08 00:48:50.523258 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.29s 2026-04-08 00:48:50.523264 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.25s 2026-04-08 00:48:50.523267 | orchestrator | opensearch : Flush handlers 
--------------------------------------------- 0.20s 2026-04-08 00:48:50.523271 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.19s 2026-04-08 00:48:50.525888 | orchestrator | 2026-04-08 00:48:50 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:48:50.525934 | orchestrator | 2026-04-08 00:48:50 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:48:50.525943 | orchestrator | 2026-04-08 00:48:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:48:53.574473 | orchestrator | 2026-04-08 00:48:53 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:48:53.576020 | orchestrator | 2026-04-08 00:48:53 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:48:53.576063 | orchestrator | 2026-04-08 00:48:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:48:56.619656 | orchestrator | 2026-04-08 00:48:56 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:48:56.621590 | orchestrator | 2026-04-08 00:48:56 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:48:56.621638 | orchestrator | 2026-04-08 00:48:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:48:59.667946 | orchestrator | 2026-04-08 00:48:59 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:48:59.670840 | orchestrator | 2026-04-08 00:48:59 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:48:59.670930 | orchestrator | 2026-04-08 00:48:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:02.709602 | orchestrator | 2026-04-08 00:49:02 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:49:02.712720 | orchestrator | 2026-04-08 00:49:02 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 
00:49:02.712765 | orchestrator | 2026-04-08 00:49:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:05.765348 | orchestrator | 2026-04-08 00:49:05 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:49:05.766235 | orchestrator | 2026-04-08 00:49:05 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:49:05.766286 | orchestrator | 2026-04-08 00:49:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:08.813078 | orchestrator | 2026-04-08 00:49:08 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:49:08.813299 | orchestrator | 2026-04-08 00:49:08 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:49:08.813315 | orchestrator | 2026-04-08 00:49:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:11.867835 | orchestrator | 2026-04-08 00:49:11 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:49:11.870122 | orchestrator | 2026-04-08 00:49:11 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:49:11.870403 | orchestrator | 2026-04-08 00:49:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:14.914327 | orchestrator | 2026-04-08 00:49:14 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:49:14.915700 | orchestrator | 2026-04-08 00:49:14 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:49:14.915837 | orchestrator | 2026-04-08 00:49:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:17.960068 | orchestrator | 2026-04-08 00:49:17 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:49:17.961685 | orchestrator | 2026-04-08 00:49:17 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:49:17.961940 | orchestrator | 2026-04-08 00:49:17 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 00:49:21.026126 | orchestrator | 2026-04-08 00:49:21 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:49:21.028167 | orchestrator | 2026-04-08 00:49:21 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:49:21.028232 | orchestrator | 2026-04-08 00:49:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:24.069829 | orchestrator | 2026-04-08 00:49:24 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:49:24.070111 | orchestrator | 2026-04-08 00:49:24 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:49:24.070134 | orchestrator | 2026-04-08 00:49:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:27.118426 | orchestrator | 2026-04-08 00:49:27 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:49:27.120393 | orchestrator | 2026-04-08 00:49:27 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:49:27.120470 | orchestrator | 2026-04-08 00:49:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:30.169120 | orchestrator | 2026-04-08 00:49:30 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:49:30.170284 | orchestrator | 2026-04-08 00:49:30 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:49:30.170389 | orchestrator | 2026-04-08 00:49:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:33.224975 | orchestrator | 2026-04-08 00:49:33 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:49:33.226345 | orchestrator | 2026-04-08 00:49:33 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:49:33.227377 | orchestrator | 2026-04-08 00:49:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:36.292851 | orchestrator | 2026-04-08 
00:49:36 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:49:36.295494 | orchestrator | 2026-04-08 00:49:36 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:49:36.295861 | orchestrator | 2026-04-08 00:49:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:39.359808 | orchestrator | 2026-04-08 00:49:39 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:49:39.363038 | orchestrator | 2026-04-08 00:49:39 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state STARTED 2026-04-08 00:49:39.363664 | orchestrator | 2026-04-08 00:49:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:42.416431 | orchestrator | 2026-04-08 00:49:42 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:49:42.418704 | orchestrator | 2026-04-08 00:49:42 | INFO  | Task 2a54e49a-fc1c-479c-908c-7cd3b27e2331 is in state SUCCESS 2026-04-08 00:49:42.421207 | orchestrator | 2026-04-08 00:49:42.421273 | orchestrator | 2026-04-08 00:49:42.421297 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-04-08 00:49:42.421322 | orchestrator | 2026-04-08 00:49:42.421338 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-08 00:49:42.421354 | orchestrator | Wednesday 08 April 2026 00:48:28 +0000 (0:00:00.105) 0:00:00.105 ******* 2026-04-08 00:49:42.421370 | orchestrator | ok: [localhost] => { 2026-04-08 00:49:42.421388 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
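[Editor's note] The three identical "Restart opensearch container" failures above all report `400 Client Error ... Bad Request ("invalid reference format")` for the pull of `registry.osism.tech/kolla/release//opensearch:2.19.5.20260328`. The doubled slash in the repository path (visible throughout the task output, e.g. `kolla/release//opensearch`) produces an empty path component, which the Docker daemon rejects. The sketch below is illustrative only, not the full docker/distribution reference grammar; the `has_valid_path_components` helper and the `fixed` variant are assumptions for demonstration:

```python
def has_valid_path_components(reference: str) -> bool:
    """Return True if every '/'-separated component of the repository
    part of an image reference is non-empty (a doubled slash fails)."""
    repository = reference.split(":")[0]  # drop the tag, if present
    return all(component for component in repository.split("/"))

# The image name emitted by the playbook (note the doubled slash):
broken = "registry.osism.tech/kolla/release//opensearch:2.19.5.20260328"
# Hypothetical corrected form with the empty component removed:
fixed = broken.replace("release//", "release/")

print(has_valid_path_components(broken))  # False
print(has_valid_path_components(fixed))   # True
```

In a Kolla-based deployment the doubled slash typically indicates that a namespace-like variable interpolated between `release` and the image name expanded to an empty string, so the fix belongs in the image-name configuration rather than in the daemon.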
2026-04-08 00:49:42.421403 | orchestrator | }
2026-04-08 00:49:42.421419 | orchestrator |
2026-04-08 00:49:42.421434 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-04-08 00:49:42.421451 | orchestrator | Wednesday 08 April 2026 00:48:28 +0000 (0:00:00.053) 0:00:00.158 *******
2026-04-08 00:49:42.421467 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-04-08 00:49:42.421486 | orchestrator | ...ignoring
2026-04-08 00:49:42.421505 | orchestrator |
2026-04-08 00:49:42.421522 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-04-08 00:49:42.421538 | orchestrator | Wednesday 08 April 2026 00:48:31 +0000 (0:00:02.997) 0:00:03.156 *******
2026-04-08 00:49:42.421555 | orchestrator | skipping: [localhost]
2026-04-08 00:49:42.421571 | orchestrator |
2026-04-08 00:49:42.421590 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-04-08 00:49:42.421605 | orchestrator | Wednesday 08 April 2026 00:48:31 +0000 (0:00:00.050) 0:00:03.206 *******
2026-04-08 00:49:42.421623 | orchestrator | ok: [localhost]
2026-04-08 00:49:42.421639 | orchestrator |
2026-04-08 00:49:42.421649 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 00:49:42.421659 | orchestrator |
2026-04-08 00:49:42.421670 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 00:49:42.421679 | orchestrator | Wednesday 08 April 2026 00:48:31 +0000 (0:00:00.273) 0:00:03.415 *******
2026-04-08 00:49:42.421689 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:49:42.421699 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:49:42.421709 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:49:42.421719 | orchestrator |
2026-04-08 00:49:42.421729 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 00:49:42.421739 | orchestrator | Wednesday 08 April 2026 00:48:31 +0000 (0:00:00.273) 0:00:03.688 *******
2026-04-08 00:49:42.421748 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-08 00:49:42.421759 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-08 00:49:42.421769 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-08 00:49:42.422331 | orchestrator |
2026-04-08 00:49:42.422347 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-08 00:49:42.422386 | orchestrator |
2026-04-08 00:49:42.422397 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-08 00:49:42.422407 | orchestrator | Wednesday 08 April 2026 00:48:31 +0000 (0:00:00.342) 0:00:04.031 *******
2026-04-08 00:49:42.422417 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-08 00:49:42.422426 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-08 00:49:42.422436 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-08 00:49:42.422446 | orchestrator |
2026-04-08 00:49:42.422455 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-08 00:49:42.422465 | orchestrator | Wednesday 08 April 2026 00:48:32 +0000 (0:00:00.341) 0:00:04.373 *******
2026-04-08 00:49:42.422475 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:49:42.422486 | orchestrator |
2026-04-08 00:49:42.422495 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-04-08 00:49:42.422505 | orchestrator | Wednesday 08 April 2026 00:48:32 +0000 (0:00:00.611) 0:00:04.984 *******
2026-04-08 00:49:42.422612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:49:42.422632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:49:42.422659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:49:42.422703 | orchestrator |
2026-04-08 00:49:42.422716 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-04-08 00:49:42.422725 | orchestrator | Wednesday 08 April 2026 00:48:36 +0000 (0:00:03.133) 0:00:08.117 *******
2026-04-08 00:49:42.422735 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:42.422745 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:42.422755 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:49:42.422765 | orchestrator |
2026-04-08 00:49:42.422775 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-04-08 00:49:42.422784 | orchestrator | Wednesday 08 April 2026 00:48:36 +0000 (0:00:00.531) 0:00:08.649 *******
2026-04-08 00:49:42.422794 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:42.422803 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:42.422813 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:49:42.422823 | orchestrator |
2026-04-08 00:49:42.422832 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-04-08 00:49:42.422842 | orchestrator | Wednesday 08 April 2026 00:48:37 +0000 (0:00:01.231) 0:00:09.880 *******
2026-04-08 00:49:42.422852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:49:42.422883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:49:42.422895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:49:42.422911 | orchestrator |
2026-04-08 00:49:42.422921 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-04-08 00:49:42.422935 | orchestrator | Wednesday 08 April 2026 00:48:41 +0000 (0:00:03.372) 0:00:13.253 *******
2026-04-08 00:49:42.422953 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:42.422970 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:42.422985 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:49:42.422999 | orchestrator |
2026-04-08 00:49:42.423096 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-04-08 00:49:42.423116 | orchestrator | Wednesday 08 April 2026 00:48:42 +0000 (0:00:00.944) 0:00:14.198 *******
2026-04-08 00:49:42.423130 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:49:42.423146 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:49:42.423162 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:49:42.423177 | orchestrator |
2026-04-08 00:49:42.423194 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-08 00:49:42.423210 | orchestrator | Wednesday 08 April 2026 00:48:46 +0000 (0:00:03.863) 0:00:18.061 *******
2026-04-08 00:49:42.423227 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:49:42.423243 | orchestrator |
2026-04-08 00:49:42.423260 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-04-08 00:49:42.423276 | orchestrator | Wednesday 08 April 2026 00:48:46 +0000 (0:00:00.440) 0:00:18.501 *******
2026-04-08 00:49:42.423317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:49:42.423347 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:42.423365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:49:42.423382 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:42.423418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:49:42.423448 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:42.423466 | orchestrator |
2026-04-08 00:49:42.423483 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-04-08 00:49:42.423500 | orchestrator | Wednesday 08 April 2026 00:48:49 +0000 (0:00:02.567) 0:00:21.069 *******
2026-04-08 00:49:42.423518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:49:42.423535 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:42.423571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:49:42.423600 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:42.423618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:49:42.423636 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:42.423653 | orchestrator |
2026-04-08 00:49:42.423669 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-04-08 00:49:42.423686 | orchestrator | Wednesday 08 April 2026 00:48:51 +0000 (0:00:02.086) 0:00:23.156 *******
2026-04-08 00:49:42.423728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:49:42.423760 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:49:42.423779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:49:42.423795 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:49:42.423818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:49:42.423845 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:49:42.423867 | orchestrator |
2026-04-08 00:49:42.423885 | orchestrator | TASK [service-check-containers : mariadb | Check containers] *******************
2026-04-08 00:49:42.423911 | orchestrator | Wednesday 08 April 2026 00:48:53 +0000 (0:00:02.252) 0:00:25.408 *******
2026-04-08 00:49:42.423930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-08 00:49:42.423955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-08 00:49:42.423996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-08 00:49:42.424078 | orchestrator | 2026-04-08 00:49:42.424100 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-04-08 00:49:42.424117 | orchestrator | Wednesday 08 April 2026 00:48:56 +0000 (0:00:02.690) 0:00:28.099 ******* 2026-04-08 00:49:42.424134 | orchestrator | changed: [testbed-node-0] => { 2026-04-08 00:49:42.424151 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:49:42.424168 | orchestrator | } 2026-04-08 00:49:42.424185 | orchestrator | changed: [testbed-node-1] => { 2026-04-08 00:49:42.424200 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:49:42.424217 | orchestrator | } 2026-04-08 00:49:42.424233 | orchestrator | changed: [testbed-node-2] => { 2026-04-08 00:49:42.424250 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:49:42.424266 | orchestrator | } 2026-04-08 00:49:42.424282 | orchestrator | 2026-04-08 00:49:42.424297 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-08 00:49:42.424315 | orchestrator | Wednesday 08 April 2026 00:48:56 +0000 (0:00:00.358) 0:00:28.457 ******* 2026-04-08 00:49:42.424341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:49:42.424371 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.424401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:49:42.424419 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.424442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:49:42.424471 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.424488 | orchestrator | 2026-04-08 00:49:42.424505 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-04-08 00:49:42.424522 | orchestrator | Wednesday 08 April 2026 00:48:58 +0000 (0:00:02.304) 0:00:30.762 ******* 2026-04-08 00:49:42.424538 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.424554 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.424569 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.424585 | orchestrator | 2026-04-08 00:49:42.424601 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-04-08 00:49:42.424617 | orchestrator | Wednesday 08 April 2026 00:48:59 +0000 (0:00:00.463) 0:00:31.226 ******* 2026-04-08 
00:49:42.424633 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.424649 | orchestrator | 2026-04-08 00:49:42.424672 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-04-08 00:49:42.424688 | orchestrator | Wednesday 08 April 2026 00:48:59 +0000 (0:00:00.091) 0:00:31.317 ******* 2026-04-08 00:49:42.424704 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.424720 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.424737 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.424752 | orchestrator | 2026-04-08 00:49:42.424768 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-04-08 00:49:42.424784 | orchestrator | Wednesday 08 April 2026 00:48:59 +0000 (0:00:00.298) 0:00:31.616 ******* 2026-04-08 00:49:42.424802 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.424818 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.424834 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.424852 | orchestrator | 2026-04-08 00:49:42.424868 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-04-08 00:49:42.424883 | orchestrator | Wednesday 08 April 2026 00:48:59 +0000 (0:00:00.310) 0:00:31.926 ******* 2026-04-08 00:49:42.424899 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.424915 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.424932 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.424949 | orchestrator | 2026-04-08 00:49:42.424965 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-04-08 00:49:42.424980 | orchestrator | Wednesday 08 April 2026 00:49:00 +0000 (0:00:00.288) 0:00:32.214 ******* 2026-04-08 00:49:42.424997 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.425035 | orchestrator | skipping: [testbed-node-1] 2026-04-08 
00:49:42.425053 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.425069 | orchestrator | 2026-04-08 00:49:42.425086 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-04-08 00:49:42.425103 | orchestrator | Wednesday 08 April 2026 00:49:00 +0000 (0:00:00.565) 0:00:32.780 ******* 2026-04-08 00:49:42.425119 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.425135 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.425149 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.425159 | orchestrator | 2026-04-08 00:49:42.425168 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-04-08 00:49:42.425178 | orchestrator | Wednesday 08 April 2026 00:49:01 +0000 (0:00:00.393) 0:00:33.174 ******* 2026-04-08 00:49:42.425187 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.425197 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.425206 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.425225 | orchestrator | 2026-04-08 00:49:42.425234 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-04-08 00:49:42.425244 | orchestrator | Wednesday 08 April 2026 00:49:01 +0000 (0:00:00.329) 0:00:33.503 ******* 2026-04-08 00:49:42.425254 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-08 00:49:42.425264 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-08 00:49:42.425355 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-08 00:49:42.425368 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.425378 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-08 00:49:42.425387 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-08 00:49:42.425397 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  
2026-04-08 00:49:42.425406 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.425416 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-08 00:49:42.425425 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-08 00:49:42.425435 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-08 00:49:42.425444 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.425454 | orchestrator | 2026-04-08 00:49:42.425464 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-04-08 00:49:42.425473 | orchestrator | Wednesday 08 April 2026 00:49:01 +0000 (0:00:00.344) 0:00:33.848 ******* 2026-04-08 00:49:42.425483 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.425492 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.425502 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.425511 | orchestrator | 2026-04-08 00:49:42.425521 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-04-08 00:49:42.425530 | orchestrator | Wednesday 08 April 2026 00:49:02 +0000 (0:00:00.498) 0:00:34.346 ******* 2026-04-08 00:49:42.425540 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.425549 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.425559 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.425568 | orchestrator | 2026-04-08 00:49:42.425577 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-04-08 00:49:42.425587 | orchestrator | Wednesday 08 April 2026 00:49:02 +0000 (0:00:00.288) 0:00:34.635 ******* 2026-04-08 00:49:42.425599 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.425615 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.425638 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.425658 | orchestrator | 2026-04-08 
00:49:42.425674 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-04-08 00:49:42.425694 | orchestrator | Wednesday 08 April 2026 00:49:02 +0000 (0:00:00.301) 0:00:34.937 ******* 2026-04-08 00:49:42.425707 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.425718 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.425730 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.425742 | orchestrator | 2026-04-08 00:49:42.425755 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-04-08 00:49:42.425767 | orchestrator | Wednesday 08 April 2026 00:49:03 +0000 (0:00:00.317) 0:00:35.255 ******* 2026-04-08 00:49:42.425781 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.425795 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.425809 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.425821 | orchestrator | 2026-04-08 00:49:42.425835 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-04-08 00:49:42.425843 | orchestrator | Wednesday 08 April 2026 00:49:03 +0000 (0:00:00.300) 0:00:35.555 ******* 2026-04-08 00:49:42.425851 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.425859 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.425876 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.425884 | orchestrator | 2026-04-08 00:49:42.425892 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-04-08 00:49:42.425908 | orchestrator | Wednesday 08 April 2026 00:49:04 +0000 (0:00:00.536) 0:00:36.091 ******* 2026-04-08 00:49:42.425916 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.425929 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.425941 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.425953 | orchestrator | 2026-04-08 
00:49:42.425965 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-04-08 00:49:42.425979 | orchestrator | Wednesday 08 April 2026 00:49:04 +0000 (0:00:00.335) 0:00:36.426 ******* 2026-04-08 00:49:42.425992 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.426005 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.426096 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.426106 | orchestrator | 2026-04-08 00:49:42.426113 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-04-08 00:49:42.426122 | orchestrator | Wednesday 08 April 2026 00:49:04 +0000 (0:00:00.282) 0:00:36.709 ******* 2026-04-08 00:49:42.426132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:49:42.426142 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.426165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:49:42.426181 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.426190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:49:42.426199 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.426207 | orchestrator | 2026-04-08 00:49:42.426215 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-04-08 00:49:42.426222 | orchestrator | Wednesday 08 April 2026 00:49:06 +0000 (0:00:02.218) 0:00:38.928 ******* 2026-04-08 00:49:42.426230 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.426238 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.426246 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.426254 | orchestrator | 2026-04-08 00:49:42.426262 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-04-08 00:49:42.426269 | orchestrator | Wednesday 08 April 2026 00:49:07 +0000 (0:00:00.536) 0:00:39.465 ******* 2026-04-08 00:49:42.426294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:49:42.426308 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.426317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:49:42.426326 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.426344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-08 00:49:42.426358 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.426366 | orchestrator | 2026-04-08 00:49:42.426374 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-04-08 00:49:42.426382 | orchestrator | Wednesday 08 April 2026 00:49:09 +0000 (0:00:02.229) 0:00:41.695 ******* 2026-04-08 00:49:42.426390 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.426398 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.426406 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.426414 | orchestrator | 2026-04-08 00:49:42.426422 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-08 00:49:42.426430 | orchestrator | Wednesday 08 April 2026 00:49:09 +0000 (0:00:00.303) 0:00:41.998 ******* 2026-04-08 
00:49:42.426437 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.426445 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.426453 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.426461 | orchestrator | 2026-04-08 00:49:42.426469 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-04-08 00:49:42.426477 | orchestrator | Wednesday 08 April 2026 00:49:10 +0000 (0:00:00.320) 0:00:42.319 ******* 2026-04-08 00:49:42.426485 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.426492 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.426500 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.426510 | orchestrator | 2026-04-08 00:49:42.426523 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-04-08 00:49:42.426536 | orchestrator | Wednesday 08 April 2026 00:49:10 +0000 (0:00:00.519) 0:00:42.838 ******* 2026-04-08 00:49:42.426549 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.426561 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.426573 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.426586 | orchestrator | 2026-04-08 00:49:42.426599 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-08 00:49:42.426612 | orchestrator | Wednesday 08 April 2026 00:49:11 +0000 (0:00:00.531) 0:00:43.370 ******* 2026-04-08 00:49:42.426626 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.426639 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.426653 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.426666 | orchestrator | 2026-04-08 00:49:42.426679 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-08 00:49:42.426688 | orchestrator | Wednesday 08 April 2026 00:49:11 +0000 (0:00:00.300) 0:00:43.671 ******* 2026-04-08 
00:49:42.426696 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:49:42.426703 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:49:42.426711 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:49:42.426719 | orchestrator | 2026-04-08 00:49:42.426727 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-08 00:49:42.426735 | orchestrator | Wednesday 08 April 2026 00:49:12 +0000 (0:00:01.018) 0:00:44.689 ******* 2026-04-08 00:49:42.426743 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:49:42.426758 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:49:42.426766 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:49:42.426779 | orchestrator | 2026-04-08 00:49:42.426797 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-08 00:49:42.426815 | orchestrator | Wednesday 08 April 2026 00:49:12 +0000 (0:00:00.339) 0:00:45.028 ******* 2026-04-08 00:49:42.426826 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:49:42.426839 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:49:42.426851 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:49:42.426863 | orchestrator | 2026-04-08 00:49:42.426875 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-08 00:49:42.426887 | orchestrator | Wednesday 08 April 2026 00:49:13 +0000 (0:00:00.307) 0:00:45.336 ******* 2026-04-08 00:49:42.426901 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-04-08 00:49:42.426915 | orchestrator | ...ignoring 2026-04-08 00:49:42.426928 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-04-08 00:49:42.426941 | orchestrator | ...ignoring 2026-04-08 00:49:42.426956 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-04-08 00:49:42.426968 | orchestrator | ...ignoring 2026-04-08 00:49:42.426982 | orchestrator | 2026-04-08 00:49:42.426991 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-08 00:49:42.426999 | orchestrator | Wednesday 08 April 2026 00:49:24 +0000 (0:00:10.732) 0:00:56.068 ******* 2026-04-08 00:49:42.427006 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:49:42.427040 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:49:42.427049 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:49:42.427057 | orchestrator | 2026-04-08 00:49:42.427065 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-08 00:49:42.427080 | orchestrator | Wednesday 08 April 2026 00:49:24 +0000 (0:00:00.487) 0:00:56.556 ******* 2026-04-08 00:49:42.427088 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.427096 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.427104 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.427112 | orchestrator | 2026-04-08 00:49:42.427120 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-08 00:49:42.427128 | orchestrator | Wednesday 08 April 2026 00:49:24 +0000 (0:00:00.279) 0:00:56.835 ******* 2026-04-08 00:49:42.427136 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.427144 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.427151 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.427159 | orchestrator | 2026-04-08 00:49:42.427167 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-08 00:49:42.427175 | orchestrator | Wednesday 08 April 2026 00:49:25 +0000 (0:00:00.304) 0:00:57.139 ******* 2026-04-08 00:49:42.427183 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.427198 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.427206 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.427214 | orchestrator | 2026-04-08 00:49:42.427222 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-08 00:49:42.427230 | orchestrator | Wednesday 08 April 2026 00:49:25 +0000 (0:00:00.311) 0:00:57.451 ******* 2026-04-08 00:49:42.427238 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:49:42.427246 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:49:42.427254 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:49:42.427262 | orchestrator | 2026-04-08 00:49:42.427270 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-08 00:49:42.427278 | orchestrator | Wednesday 08 April 2026 00:49:25 +0000 (0:00:00.290) 0:00:57.742 ******* 2026-04-08 00:49:42.427286 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:49:42.427294 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.427310 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.427317 | orchestrator | 2026-04-08 00:49:42.427326 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-08 00:49:42.427415 | orchestrator | Wednesday 08 April 2026 00:49:26 +0000 (0:00:00.506) 0:00:58.249 ******* 2026-04-08 00:49:42.427429 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.427441 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.427454 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-04-08 00:49:42.427466 | orchestrator | 2026-04-08 
00:49:42.427478 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-04-08 00:49:42.427489 | orchestrator | Wednesday 08 April 2026 00:49:26 +0000 (0:00:00.318) 0:00:58.567 ******* 2026-04-08 00:49:42.427503 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmariadb-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_nd9ru19z/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_nd9ru19z/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_nd9ru19z/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmariadb-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:49:42.427516 | orchestrator | 2026-04-08 00:49:42.427530 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-08 00:49:42.427549 | orchestrator | Wednesday 08 April 2026 00:49:30 +0000 (0:00:03.521) 0:01:02.089 ******* 2026-04-08 00:49:42.427562 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.427575 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.427587 | orchestrator | 2026-04-08 00:49:42.427600 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-08 00:49:42.427613 | orchestrator | Wednesday 08 April 2026 00:49:30 +0000 (0:00:00.603) 0:01:02.692 ******* 2026-04-08 00:49:42.427626 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:49:42.427639 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:49:42.427651 | orchestrator | 2026-04-08 00:49:42.427663 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-08 00:49:42.427671 | orchestrator | Wednesday 08 April 2026 00:49:30 +0000 (0:00:00.242) 0:01:02.934 ******* 2026-04-08 00:49:42.427691 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:49:42.427699 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:49:42.427716 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-08 00:49:42.427725 | orchestrator | 2026-04-08 00:49:42.427733 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-08 00:49:42.427740 | 
orchestrator | skipping: no hosts matched 2026-04-08 00:49:42.427748 | orchestrator | 2026-04-08 00:49:42.427756 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-08 00:49:42.427764 | orchestrator | 2026-04-08 00:49:42.427771 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-08 00:49:42.427779 | orchestrator | Wednesday 08 April 2026 00:49:31 +0000 (0:00:00.289) 0:01:03.224 ******* 2026-04-08 00:49:42.427788 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmariadb-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_t0w6wrj_/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_t0w6wrj_/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_t0w6wrj_/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_t0w6wrj_/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmariadb-server: Bad Request (\"invalid reference format\")\\n'"}
2026-04-08 00:49:42.427798 | orchestrator |
2026-04-08 00:49:42.427805 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:49:42.427813 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-08 00:49:42.427822 | orchestrator | testbed-node-0 : ok=20  changed=9  unreachable=0 failed=1  skipped=33  rescued=0 ignored=1
2026-04-08 00:49:42.427839 | orchestrator | testbed-node-1 : ok=16  changed=7  unreachable=0 failed=1  skipped=38  rescued=0 ignored=1
2026-04-08 00:49:42.427860 | orchestrator | testbed-node-2 : ok=16  changed=7  unreachable=0 failed=0 skipped=38  rescued=0 ignored=1
2026-04-08 00:49:42.427877 | orchestrator |
2026-04-08 00:49:42.427891 | orchestrator |
2026-04-08 00:49:42.427903 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:49:42.427918 | orchestrator | Wednesday 08 April 2026 00:49:41 +0000 (0:00:10.517) 0:01:13.742 *******
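[Editor's note] The bootstrap and restart tasks above both fail because the configured image name, `registry.osism.tech/kolla/release//mariadb-server`, contains a double slash, leaving an empty path component that the Docker daemon rejects as "invalid reference format" (most likely an empty namespace/prefix variable joined into the image path). A minimal, simplified sketch of the path rule — an approximation of Docker's reference grammar, not the full specification — that catches this class of error before a deploy:

```python
import re

# Simplified approximation of Docker's repository-path rule: every
# '/'-separated component must be non-empty lowercase alphanumerics,
# optionally joined by '.', '_' or '-'. Not the full reference grammar.
COMPONENT = re.compile(r"^[a-z0-9]+(?:[._-][a-z0-9]+)*$")

def is_valid_repository(image: str) -> bool:
    """Return True if every path component of the image name is well formed."""
    name = image.split(":", 1)[0]  # drop the tag, if any
    return all(COMPONENT.match(part) for part in name.split("/"))

# The image from the log above: 'release//mariadb-server' yields an
# empty component, which the daemon rejects with "invalid reference format".
print(is_valid_repository("registry.osism.tech/kolla/release//mariadb-server"))  # False
print(is_valid_repository("registry.osism.tech/kolla/release/mariadb-server"))   # True
```

Under this reading, fixing the empty namespace that produces `release//` in the kolla image variables would let the pull proceed.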
2026-04-08 00:49:42.427926 | orchestrator | ===============================================================================
2026-04-08 00:49:42.427934 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.73s
2026-04-08 00:49:42.427942 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.52s
2026-04-08 00:49:42.427961 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.86s
2026-04-08 00:49:42.427983 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 3.52s
2026-04-08 00:49:42.427997 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.37s
2026-04-08 00:49:42.428009 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.13s
2026-04-08 00:49:42.428090 | orchestrator | Check MariaDB service --------------------------------------------------- 3.00s
2026-04-08 00:49:42.428104 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 2.69s
2026-04-08 00:49:42.428118 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.57s
2026-04-08 00:49:42.428132 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.30s
2026-04-08 00:49:42.428145 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.25s
2026-04-08 00:49:42.428158 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 2.23s
2026-04-08 00:49:42.428170 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 2.22s
2026-04-08 00:49:42.428183 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.09s
2026-04-08 00:49:42.428198 | orchestrator | mariadb : Copying over my.cnf for mariabackup --------------------------- 1.23s
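[Editor's note] The watcher output that follows polls the state of three task IDs once per second until they leave STARTED. A minimal sketch of that fixed-interval poll loop, where `fetch_state` is a hypothetical stand-in for the real task-state lookup used by the orchestrator:

```python
import itertools
import time

def wait_for_tasks(task_ids, fetch_state, interval=1.0, max_checks=3600):
    """Poll each task's state at a fixed interval until none is still running.

    `fetch_state` is a placeholder for the real state lookup; states follow
    the usual PENDING/STARTED/SUCCESS/FAILURE convention seen in the log.
    """
    for _ in range(max_checks):
        states = {tid: fetch_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(s not in ("PENDING", "STARTED") for s in states.values()):
            return states
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError("tasks did not finish within the allotted checks")

# Example with a fake state source that reports STARTED twice, then SUCCESS:
counter = itertools.count()
states = wait_for_tasks(
    ["f04d5cab"],
    lambda tid: "STARTED" if next(counter) < 2 else "SUCCESS",
    interval=0,
)
# states == {"f04d5cab": "SUCCESS"}
```

Note that a loop like this only observes terminal states; the actual failure above still has to be read out of the Ansible play that the task wraps.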
2026-04-08 00:49:42.428211 | orchestrator | mariadb : Create MariaDB volume ----------------------------------------- 1.02s 2026-04-08 00:49:42.428223 | orchestrator | mariadb : Copying over config.json files for mariabackup ---------------- 0.94s 2026-04-08 00:49:42.428237 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.61s 2026-04-08 00:49:42.428245 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.60s 2026-04-08 00:49:42.428253 | orchestrator | mariadb : Get MariaDB wsrep recovery seqno ------------------------------ 0.57s 2026-04-08 00:49:42.428261 | orchestrator | 2026-04-08 00:49:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:45.477372 | orchestrator | 2026-04-08 00:49:45 | INFO  | Task f04d5cab-6961-40c0-8749-445009e46469 is in state STARTED 2026-04-08 00:49:45.477690 | orchestrator | 2026-04-08 00:49:45 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:49:45.478526 | orchestrator | 2026-04-08 00:49:45 | INFO  | Task 16a46925-fe2c-4b68-b67e-62ecbae8c6eb is in state STARTED 2026-04-08 00:49:45.478570 | orchestrator | 2026-04-08 00:49:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:48.510818 | orchestrator | 2026-04-08 00:49:48 | INFO  | Task f04d5cab-6961-40c0-8749-445009e46469 is in state STARTED 2026-04-08 00:49:48.510901 | orchestrator | 2026-04-08 00:49:48 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:49:48.510995 | orchestrator | 2026-04-08 00:49:48 | INFO  | Task 16a46925-fe2c-4b68-b67e-62ecbae8c6eb is in state STARTED 2026-04-08 00:49:48.511184 | orchestrator | 2026-04-08 00:49:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:51.544812 | orchestrator | 2026-04-08 00:49:51 | INFO  | Task f04d5cab-6961-40c0-8749-445009e46469 is in state STARTED 2026-04-08 00:49:51.547244 | orchestrator | 2026-04-08 00:49:51 | INFO  | Task 
aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:49:51.549236 | orchestrator | 2026-04-08 00:49:51 | INFO  | Task 16a46925-fe2c-4b68-b67e-62ecbae8c6eb is in state STARTED 2026-04-08 00:49:51.549429 | orchestrator | 2026-04-08 00:49:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:54.586208 | orchestrator | 2026-04-08 00:49:54 | INFO  | Task f04d5cab-6961-40c0-8749-445009e46469 is in state STARTED 2026-04-08 00:49:54.586298 | orchestrator | 2026-04-08 00:49:54 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:49:54.588456 | orchestrator | 2026-04-08 00:49:54 | INFO  | Task 16a46925-fe2c-4b68-b67e-62ecbae8c6eb is in state STARTED 2026-04-08 00:49:54.588530 | orchestrator | 2026-04-08 00:49:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:49:57.620948 | orchestrator | 2026-04-08 00:49:57 | INFO  | Task f04d5cab-6961-40c0-8749-445009e46469 is in state STARTED 2026-04-08 00:49:57.621094 | orchestrator | 2026-04-08 00:49:57 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:49:57.621108 | orchestrator | 2026-04-08 00:49:57 | INFO  | Task 16a46925-fe2c-4b68-b67e-62ecbae8c6eb is in state STARTED 2026-04-08 00:49:57.621114 | orchestrator | 2026-04-08 00:49:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:50:00.652673 | orchestrator | 2026-04-08 00:50:00 | INFO  | Task f04d5cab-6961-40c0-8749-445009e46469 is in state STARTED 2026-04-08 00:50:00.653922 | orchestrator | 2026-04-08 00:50:00 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:50:00.657021 | orchestrator | 2026-04-08 00:50:00 | INFO  | Task 16a46925-fe2c-4b68-b67e-62ecbae8c6eb is in state STARTED 2026-04-08 00:50:00.657103 | orchestrator | 2026-04-08 00:50:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:50:03.695428 | orchestrator | 2026-04-08 00:50:03 | INFO  | Task f04d5cab-6961-40c0-8749-445009e46469 is in state 
STARTED 2026-04-08 00:50:03.697513 | orchestrator | 2026-04-08 00:50:03 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:50:03.697586 | orchestrator | 2026-04-08 00:50:03 | INFO  | Task 16a46925-fe2c-4b68-b67e-62ecbae8c6eb is in state STARTED 2026-04-08 00:50:03.697598 | orchestrator | 2026-04-08 00:50:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:50:06.740603 | orchestrator | 2026-04-08 00:50:06 | INFO  | Task f04d5cab-6961-40c0-8749-445009e46469 is in state STARTED 2026-04-08 00:50:06.740671 | orchestrator | 2026-04-08 00:50:06 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:50:06.741411 | orchestrator | 2026-04-08 00:50:06 | INFO  | Task 16a46925-fe2c-4b68-b67e-62ecbae8c6eb is in state STARTED 2026-04-08 00:50:06.741432 | orchestrator | 2026-04-08 00:50:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:50:09.772712 | orchestrator | 2026-04-08 00:50:09 | INFO  | Task f04d5cab-6961-40c0-8749-445009e46469 is in state STARTED 2026-04-08 00:50:09.773280 | orchestrator | 2026-04-08 00:50:09 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:50:09.774224 | orchestrator | 2026-04-08 00:50:09 | INFO  | Task 16a46925-fe2c-4b68-b67e-62ecbae8c6eb is in state STARTED 2026-04-08 00:50:09.774273 | orchestrator | 2026-04-08 00:50:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:50:12.810705 | orchestrator | 2026-04-08 00:50:12 | INFO  | Task f04d5cab-6961-40c0-8749-445009e46469 is in state STARTED 2026-04-08 00:50:12.810916 | orchestrator | 2026-04-08 00:50:12 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:50:12.813131 | orchestrator | 2026-04-08 00:50:12 | INFO  | Task 16a46925-fe2c-4b68-b67e-62ecbae8c6eb is in state STARTED 2026-04-08 00:50:12.813238 | orchestrator | 2026-04-08 00:50:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:50:15.860246 | orchestrator | 
2026-04-08 00:50:15 | INFO  | Task f04d5cab-6961-40c0-8749-445009e46469 is in state SUCCESS 2026-04-08 00:50:15.861315 | orchestrator | 2026-04-08 00:50:15.861529 | orchestrator | 2026-04-08 00:50:15.861556 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 00:50:15.861841 | orchestrator | 2026-04-08 00:50:15.861855 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 00:50:15.861865 | orchestrator | Wednesday 08 April 2026 00:49:45 +0000 (0:00:00.327) 0:00:00.327 ******* 2026-04-08 00:50:15.861875 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:50:15.861886 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:50:15.861895 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:50:15.861906 | orchestrator | 2026-04-08 00:50:15.861922 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 00:50:15.861938 | orchestrator | Wednesday 08 April 2026 00:49:45 +0000 (0:00:00.291) 0:00:00.619 ******* 2026-04-08 00:50:15.861954 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-04-08 00:50:15.861967 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-04-08 00:50:15.861977 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-04-08 00:50:15.861987 | orchestrator | 2026-04-08 00:50:15.861996 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-04-08 00:50:15.862006 | orchestrator | 2026-04-08 00:50:15.862084 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-08 00:50:15.862096 | orchestrator | Wednesday 08 April 2026 00:49:45 +0000 (0:00:00.305) 0:00:00.924 ******* 2026-04-08 00:50:15.862106 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:50:15.862117 | orchestrator 
| 2026-04-08 00:50:15.862127 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-08 00:50:15.862137 | orchestrator | Wednesday 08 April 2026 00:49:46 +0000 (0:00:00.620) 0:00:01.545 ******* 2026-04-08 00:50:15.862169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:50:15.862243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:50:15.862271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:50:15.862300 | orchestrator | 2026-04-08 00:50:15.862316 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-08 00:50:15.862334 | orchestrator | Wednesday 08 April 2026 00:49:48 +0000 (0:00:01.803) 0:00:03.348 ******* 2026-04-08 00:50:15.862351 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:50:15.862367 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:50:15.862381 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:50:15.862391 | orchestrator | 2026-04-08 00:50:15.862411 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-08 00:50:15.862421 | orchestrator | Wednesday 08 April 2026 00:49:48 +0000 (0:00:00.264) 0:00:03.613 ******* 2026-04-08 00:50:15.862431 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-08 00:50:15.862443 | orchestrator | skipping: 
[testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-08 00:50:15.862454 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-08 00:50:15.862465 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-08 00:50:15.862476 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-08 00:50:15.862487 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-08 00:50:15.862499 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-08 00:50:15.862510 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-08 00:50:15.862521 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-08 00:50:15.862532 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-08 00:50:15.862544 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-08 00:50:15.862555 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-08 00:50:15.862566 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-08 00:50:15.862577 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-04-08 00:50:15.862593 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-08 00:50:15.862775 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-08 00:50:15.862793 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-08 00:50:15.862842 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-08 
00:50:15.862859 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-04-08 00:50:15.862876 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-08 00:50:15.862912 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-08 00:50:15.862928 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-08 00:50:15.862941 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-08 00:50:15.862951 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-08 00:50:15.862962 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-08 00:50:15.862975 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-08 00:50:15.862992 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-08 00:50:15.863022 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-04-08 00:50:15.863059 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-04-08 00:50:15.863077 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-04-08 00:50:15.863092 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-04-08 00:50:15.863108 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-08 00:50:15.863125 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-08 00:50:15.863143 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-04-08 00:50:15.863160 | orchestrator | 2026-04-08 00:50:15.863177 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:50:15.863191 | orchestrator | Wednesday 08 April 2026 00:49:49 +0000 (0:00:00.624) 0:00:04.238 ******* 2026-04-08 00:50:15.863201 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:50:15.863211 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:50:15.863230 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:50:15.863240 | orchestrator | 2026-04-08 00:50:15.863250 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-08 00:50:15.863260 | orchestrator | Wednesday 08 April 2026 00:49:49 +0000 (0:00:00.372) 0:00:04.610 ******* 2026-04-08 00:50:15.863269 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.863280 | orchestrator | 2026-04-08 00:50:15.863290 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:50:15.863300 | orchestrator | Wednesday 08 April 2026 00:49:49 +0000 (0:00:00.097) 0:00:04.707 ******* 2026-04-08 00:50:15.863309 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.863319 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:50:15.863329 | orchestrator | 
skipping: [testbed-node-2] 2026-04-08 00:50:15.863338 | orchestrator | 2026-04-08 00:50:15.863348 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:50:15.863357 | orchestrator | Wednesday 08 April 2026 00:49:49 +0000 (0:00:00.237) 0:00:04.945 ******* 2026-04-08 00:50:15.863367 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:50:15.863376 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:50:15.863386 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:50:15.863395 | orchestrator | 2026-04-08 00:50:15.863405 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-08 00:50:15.863424 | orchestrator | Wednesday 08 April 2026 00:49:50 +0000 (0:00:00.286) 0:00:05.231 ******* 2026-04-08 00:50:15.863434 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.863443 | orchestrator | 2026-04-08 00:50:15.863453 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:50:15.863462 | orchestrator | Wednesday 08 April 2026 00:49:50 +0000 (0:00:00.106) 0:00:05.337 ******* 2026-04-08 00:50:15.863472 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.863482 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:50:15.863491 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:15.863500 | orchestrator | 2026-04-08 00:50:15.863510 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:50:15.863527 | orchestrator | Wednesday 08 April 2026 00:49:50 +0000 (0:00:00.392) 0:00:05.730 ******* 2026-04-08 00:50:15.863537 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:50:15.863549 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:50:15.863565 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:50:15.863581 | orchestrator | 2026-04-08 00:50:15.863595 | orchestrator | TASK [horizon : Check if policies shall be overwritten] 
************************ 2026-04-08 00:50:15.863619 | orchestrator | Wednesday 08 April 2026 00:49:50 +0000 (0:00:00.291) 0:00:06.022 ******* 2026-04-08 00:50:15.863637 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.863652 | orchestrator | 2026-04-08 00:50:15.863667 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:50:15.863682 | orchestrator | Wednesday 08 April 2026 00:49:50 +0000 (0:00:00.093) 0:00:06.116 ******* 2026-04-08 00:50:15.863698 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.863713 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:50:15.863727 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:15.863743 | orchestrator | 2026-04-08 00:50:15.863759 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:50:15.863774 | orchestrator | Wednesday 08 April 2026 00:49:51 +0000 (0:00:00.247) 0:00:06.363 ******* 2026-04-08 00:50:15.863790 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:50:15.863805 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:50:15.863821 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:50:15.863836 | orchestrator | 2026-04-08 00:50:15.863853 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-08 00:50:15.863869 | orchestrator | Wednesday 08 April 2026 00:49:51 +0000 (0:00:00.263) 0:00:06.627 ******* 2026-04-08 00:50:15.863885 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.863902 | orchestrator | 2026-04-08 00:50:15.863913 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:50:15.863922 | orchestrator | Wednesday 08 April 2026 00:49:51 +0000 (0:00:00.121) 0:00:06.749 ******* 2026-04-08 00:50:15.863932 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.863941 | orchestrator | skipping: [testbed-node-1] 2026-04-08 
00:50:15.863951 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:15.863960 | orchestrator | 2026-04-08 00:50:15.863970 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:50:15.863983 | orchestrator | Wednesday 08 April 2026 00:49:51 +0000 (0:00:00.373) 0:00:07.123 ******* 2026-04-08 00:50:15.864002 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:50:15.864026 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:50:15.864115 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:50:15.864131 | orchestrator | 2026-04-08 00:50:15.864147 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-08 00:50:15.864162 | orchestrator | Wednesday 08 April 2026 00:49:52 +0000 (0:00:00.259) 0:00:07.382 ******* 2026-04-08 00:50:15.864176 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.864189 | orchestrator | 2026-04-08 00:50:15.864202 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:50:15.864213 | orchestrator | Wednesday 08 April 2026 00:49:52 +0000 (0:00:00.108) 0:00:07.490 ******* 2026-04-08 00:50:15.864232 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.864240 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:50:15.864248 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:15.864255 | orchestrator | 2026-04-08 00:50:15.864263 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:50:15.864271 | orchestrator | Wednesday 08 April 2026 00:49:52 +0000 (0:00:00.247) 0:00:07.738 ******* 2026-04-08 00:50:15.864279 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:50:15.864287 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:50:15.864295 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:50:15.864302 | orchestrator | 2026-04-08 00:50:15.864310 | orchestrator | TASK [horizon : Check if policies 
shall be overwritten] ************************ 2026-04-08 00:50:15.864318 | orchestrator | Wednesday 08 April 2026 00:49:52 +0000 (0:00:00.243) 0:00:07.981 ******* 2026-04-08 00:50:15.864326 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.864334 | orchestrator | 2026-04-08 00:50:15.864342 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:50:15.864350 | orchestrator | Wednesday 08 April 2026 00:49:52 +0000 (0:00:00.119) 0:00:08.100 ******* 2026-04-08 00:50:15.864366 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.864374 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:50:15.864382 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:15.864390 | orchestrator | 2026-04-08 00:50:15.864398 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:50:15.864406 | orchestrator | Wednesday 08 April 2026 00:49:53 +0000 (0:00:00.418) 0:00:08.519 ******* 2026-04-08 00:50:15.864414 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:50:15.864422 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:50:15.864430 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:50:15.864437 | orchestrator | 2026-04-08 00:50:15.864445 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-08 00:50:15.864453 | orchestrator | Wednesday 08 April 2026 00:49:53 +0000 (0:00:00.333) 0:00:08.853 ******* 2026-04-08 00:50:15.864461 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.864469 | orchestrator | 2026-04-08 00:50:15.864477 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:50:15.864485 | orchestrator | Wednesday 08 April 2026 00:49:53 +0000 (0:00:00.118) 0:00:08.972 ******* 2026-04-08 00:50:15.864492 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.864500 | orchestrator | skipping: [testbed-node-1] 
2026-04-08 00:50:15.864508 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:15.864516 | orchestrator | 2026-04-08 00:50:15.864524 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:50:15.864531 | orchestrator | Wednesday 08 April 2026 00:49:54 +0000 (0:00:00.272) 0:00:09.244 ******* 2026-04-08 00:50:15.864539 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:50:15.864547 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:50:15.864555 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:50:15.864563 | orchestrator | 2026-04-08 00:50:15.864570 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-08 00:50:15.864578 | orchestrator | Wednesday 08 April 2026 00:49:54 +0000 (0:00:00.347) 0:00:09.592 ******* 2026-04-08 00:50:15.864586 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.864594 | orchestrator | 2026-04-08 00:50:15.864602 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:50:15.864616 | orchestrator | Wednesday 08 April 2026 00:49:54 +0000 (0:00:00.241) 0:00:09.834 ******* 2026-04-08 00:50:15.864624 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.864631 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:50:15.864639 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:15.864647 | orchestrator | 2026-04-08 00:50:15.864655 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:50:15.864663 | orchestrator | Wednesday 08 April 2026 00:49:54 +0000 (0:00:00.249) 0:00:10.083 ******* 2026-04-08 00:50:15.864671 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:50:15.864684 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:50:15.864692 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:50:15.864700 | orchestrator | 2026-04-08 00:50:15.864708 | orchestrator | TASK [horizon : Check 
if policies shall be overwritten] ************************ 2026-04-08 00:50:15.864716 | orchestrator | Wednesday 08 April 2026 00:49:55 +0000 (0:00:00.300) 0:00:10.383 ******* 2026-04-08 00:50:15.864724 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.864731 | orchestrator | 2026-04-08 00:50:15.864739 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:50:15.864747 | orchestrator | Wednesday 08 April 2026 00:49:55 +0000 (0:00:00.107) 0:00:10.491 ******* 2026-04-08 00:50:15.864755 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.864763 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:50:15.864771 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:15.864778 | orchestrator | 2026-04-08 00:50:15.864786 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-08 00:50:15.864794 | orchestrator | Wednesday 08 April 2026 00:49:55 +0000 (0:00:00.249) 0:00:10.741 ******* 2026-04-08 00:50:15.864802 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:50:15.864810 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:50:15.864818 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:50:15.864825 | orchestrator | 2026-04-08 00:50:15.864833 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-08 00:50:15.864841 | orchestrator | Wednesday 08 April 2026 00:49:56 +0000 (0:00:00.390) 0:00:11.131 ******* 2026-04-08 00:50:15.864849 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.864857 | orchestrator | 2026-04-08 00:50:15.864867 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-08 00:50:15.864880 | orchestrator | Wednesday 08 April 2026 00:49:56 +0000 (0:00:00.109) 0:00:11.241 ******* 2026-04-08 00:50:15.864892 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.864905 | orchestrator | skipping: 
[testbed-node-1] 2026-04-08 00:50:15.864918 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:15.864931 | orchestrator | 2026-04-08 00:50:15.864944 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-04-08 00:50:15.864956 | orchestrator | Wednesday 08 April 2026 00:49:56 +0000 (0:00:00.246) 0:00:11.487 ******* 2026-04-08 00:50:15.864968 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:50:15.864978 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:50:15.864991 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:50:15.865012 | orchestrator | 2026-04-08 00:50:15.865026 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-04-08 00:50:15.865059 | orchestrator | Wednesday 08 April 2026 00:49:57 +0000 (0:00:01.613) 0:00:13.100 ******* 2026-04-08 00:50:15.865072 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-08 00:50:15.865085 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-08 00:50:15.865097 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-08 00:50:15.865109 | orchestrator | 2026-04-08 00:50:15.865121 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-04-08 00:50:15.865133 | orchestrator | Wednesday 08 April 2026 00:50:00 +0000 (0:00:02.139) 0:00:15.239 ******* 2026-04-08 00:50:15.865146 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-08 00:50:15.865170 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-08 00:50:15.865183 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-08 00:50:15.865198 | 
orchestrator | 2026-04-08 00:50:15.865211 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-04-08 00:50:15.865225 | orchestrator | Wednesday 08 April 2026 00:50:03 +0000 (0:00:03.015) 0:00:18.255 ******* 2026-04-08 00:50:15.865248 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-08 00:50:15.865261 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-08 00:50:15.865273 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-08 00:50:15.865287 | orchestrator | 2026-04-08 00:50:15.865297 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-04-08 00:50:15.865305 | orchestrator | Wednesday 08 April 2026 00:50:04 +0000 (0:00:01.721) 0:00:19.976 ******* 2026-04-08 00:50:15.865313 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.865321 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:50:15.865329 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:15.865337 | orchestrator | 2026-04-08 00:50:15.865345 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-04-08 00:50:15.865353 | orchestrator | Wednesday 08 April 2026 00:50:05 +0000 (0:00:00.387) 0:00:20.364 ******* 2026-04-08 00:50:15.865361 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.865368 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:50:15.865376 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:15.865384 | orchestrator | 2026-04-08 00:50:15.865392 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-08 00:50:15.865399 | orchestrator | Wednesday 08 April 2026 00:50:05 +0000 (0:00:00.319) 0:00:20.684 ******* 2026-04-08 00:50:15.865414 | 
orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:50:15.865422 | orchestrator | 2026-04-08 00:50:15.865430 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-08 00:50:15.865438 | orchestrator | Wednesday 08 April 2026 00:50:06 +0000 (0:00:00.623) 0:00:21.307 ******* 2026-04-08 00:50:15.865449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:50:15.865486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:50:15.865502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': 
{'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:50:15.865516 | orchestrator | 2026-04-08 00:50:15.865524 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-08 00:50:15.865532 | orchestrator | Wednesday 08 April 2026 00:50:07 +0000 (0:00:01.704) 0:00:23.012 ******* 2026-04-08 00:50:15.865545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:50:15.865555 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.865570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:50:15.865591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 
'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:50:15.865601 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:50:15.865609 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:15.865617 | orchestrator | 2026-04-08 00:50:15.865625 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-08 00:50:15.865633 | orchestrator | Wednesday 08 April 2026 00:50:08 +0000 (0:00:00.809) 0:00:23.821 ******* 2026-04-08 00:50:15.865649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:50:15.865663 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.865676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:50:15.865690 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:50:15.865711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:50:15.865720 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:15.865728 | orchestrator | 2026-04-08 00:50:15.865736 | orchestrator | TASK 
[service-check-containers : horizon | Check containers] ******************* 2026-04-08 00:50:15.865744 | orchestrator | Wednesday 08 April 2026 00:50:09 +0000 (0:00:01.180) 0:00:25.002 ******* 2026-04-08 00:50:15.865758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:50:15.865777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-08 00:50:15.865793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-08 00:50:15.865807 | orchestrator |
2026-04-08 00:50:15.865815 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] ***
2026-04-08 00:50:15.865823 | orchestrator | Wednesday 08 April 2026 00:50:11 +0000 (0:00:01.419) 0:00:26.422 *******
2026-04-08 00:50:15.865831 | orchestrator | changed: [testbed-node-0] => {
2026-04-08 00:50:15.865839 | orchestrator |     "msg": "Notifying handlers"
2026-04-08 00:50:15.865847 | orchestrator | }
2026-04-08 00:50:15.865855 | orchestrator | changed: [testbed-node-1] => {
2026-04-08 00:50:15.865863 | orchestrator |     "msg": "Notifying handlers"
2026-04-08 00:50:15.865871 | orchestrator | }
2026-04-08 00:50:15.865879 | orchestrator | changed: [testbed-node-2] => {
2026-04-08 00:50:15.865886 | orchestrator |     "msg": "Notifying handlers"
2026-04-08 00:50:15.865894 | orchestrator | }
2026-04-08 00:50:15.865902 | orchestrator |
2026-04-08 00:50:15.865910 | orchestrator | TASK
[service-check-containers : Include tasks] ******************************** 2026-04-08 00:50:15.865918 | orchestrator | Wednesday 08 April 2026 00:50:11 +0000 (0:00:00.325) 0:00:26.747 ******* 2026-04-08 00:50:15.865931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:50:15.865944 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:15.865967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-08 00:50:15.865977 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:15.865985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-08 00:50:15.865999 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:50:15.866007 | orchestrator |
2026-04-08 00:50:15.866063 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-08 00:50:15.866074 | orchestrator | Wednesday 08 April 2026 00:50:12 +0000 (0:00:01.054) 0:00:27.802 *******
2026-04-08 00:50:15.866082 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:50:15.866090 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:50:15.866098 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:50:15.866106 | orchestrator |
2026-04-08 00:50:15.866120 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-08 00:50:15.866133 | orchestrator | Wednesday 08 April 2026 00:50:12 +0000 (0:00:00.303) 0:00:28.105 *******
2026-04-08 00:50:15.866146 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:50:15.866160 | orchestrator |
2026-04-08 00:50:15.866172 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-04-08 00:50:15.866180 | orchestrator | Wednesday 08 April 2026 00:50:13 +0000 (0:00:00.536) 0:00:28.642 *******
2026-04-08 00:50:15.866188 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-08 00:50:15.866196 | orchestrator |
2026-04-08 00:50:15.866204 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:50:15.866213 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=1  skipped=26  rescued=0 ignored=0
2026-04-08 00:50:15.866221 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-08 00:50:15.866230 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-08 00:50:15.866238 | orchestrator |
2026-04-08 00:50:15.866245 | orchestrator |
2026-04-08 00:50:15.866253 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:50:15.866261 | orchestrator | Wednesday 08 April 2026 00:50:14 +0000 (0:00:00.769) 0:00:29.412 *******
2026-04-08 00:50:15.866269 | orchestrator | ===============================================================================
2026-04-08 00:50:15.866281 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 3.02s
2026-04-08 00:50:15.866290 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.14s
2026-04-08 00:50:15.866297 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.80s
2026-04-08 00:50:15.866313 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.72s
2026-04-08 00:50:15.866321 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.70s
2026-04-08 00:50:15.866329 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.61s
2026-04-08 00:50:15.866337 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.42s
2026-04-08 00:50:15.866345 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.18s
2026-04-08 00:50:15.866352 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.05s
2026-04-08 00:50:15.866360 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.81s
2026-04-08 00:50:15.866368 | orchestrator | horizon : Creating Horizon database ------------------------------------- 0.77s
2026-04-08 00:50:15.866375 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s
2026-04-08 00:50:15.866383 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s
2026-04-08 00:50:15.866391 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s
2026-04-08 00:50:15.866399 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s
2026-04-08 00:50:15.866406 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.42s
2026-04-08 00:50:15.866414 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.39s
2026-04-08 00:50:15.866422 | orchestrator | horizon : Update policy file name --------------------------------------- 0.39s
2026-04-08 00:50:15.866429 | orchestrator | horizon : Copying over existing policy file ----------------------------- 0.39s
2026-04-08 00:50:15.866437 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.37s
2026-04-08 00:50:15.866445 | orchestrator | 2026-04-08 00:50:15 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:50:15.866453 | orchestrator | 2026-04-08 00:50:15 | INFO  | Task 16a46925-fe2c-4b68-b67e-62ecbae8c6eb is in state STARTED
2026-04-08 00:50:15.866461 | orchestrator | 2026-04-08 00:50:15 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:50:18.913549 | orchestrator | 2026-04-08 00:50:18 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:50:18.915361 | orchestrator | 2026-04-08 00:50:18 | INFO  | Task 16a46925-fe2c-4b68-b67e-62ecbae8c6eb is in state STARTED
2026-04-08 00:50:18.915439 | orchestrator | 2026-04-08 00:50:18 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:50:21.958289 | orchestrator | 2026-04-08 00:50:21 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:50:21.960623 | orchestrator | 2026-04-08 00:50:21 | INFO  | Task 16a46925-fe2c-4b68-b67e-62ecbae8c6eb is in state STARTED
2026-04-08 00:50:21.960698 | orchestrator | 2026-04-08 00:50:21 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:50:24.999735 | orchestrator | 2026-04-08 00:50:24 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:50:24.999956 | orchestrator | 2026-04-08 00:50:24 | INFO  | Task 16a46925-fe2c-4b68-b67e-62ecbae8c6eb is in state STARTED
2026-04-08 00:50:25.000223 | orchestrator | 2026-04-08 00:50:24 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:50:28.047210 | orchestrator | 2026-04-08 00:50:28 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:50:28.049555 | orchestrator | 2026-04-08 00:50:28 | INFO  | Task 16a46925-fe2c-4b68-b67e-62ecbae8c6eb is in state STARTED
2026-04-08 00:50:28.049608 |
orchestrator | 2026-04-08 00:50:28 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:50:31.142767 | orchestrator | 2026-04-08 00:50:31 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state STARTED
2026-04-08 00:50:31.142909 | orchestrator | 2026-04-08 00:50:31 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:50:31.143624 | orchestrator | 2026-04-08 00:50:31 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state STARTED
2026-04-08 00:50:31.144608 | orchestrator | 2026-04-08 00:50:31 | INFO  | Task 63708608-5c69-49bd-9e34-57042e1b572f is in state STARTED
2026-04-08 00:50:31.145372 | orchestrator | 2026-04-08 00:50:31 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED
2026-04-08 00:50:31.147993 | orchestrator | 2026-04-08 00:50:31 | INFO  | Task 16a46925-fe2c-4b68-b67e-62ecbae8c6eb is in state SUCCESS
2026-04-08 00:50:31.149515 | orchestrator |
2026-04-08 00:50:31.149548 | orchestrator |
2026-04-08 00:50:31.149557 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 00:50:31.149566 | orchestrator |
2026-04-08 00:50:31.149590 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 00:50:31.149600 | orchestrator | Wednesday 08 April 2026 00:49:45 +0000 (0:00:00.309) 0:00:00.309 *******
2026-04-08 00:50:31.149617 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:50:31.149626 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:50:31.149633 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:50:31.149640 | orchestrator |
2026-04-08 00:50:31.149647 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 00:50:31.149654 | orchestrator | Wednesday 08 April 2026 00:49:45 +0000 (0:00:00.298) 0:00:00.608 *******
2026-04-08 00:50:31.149662 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-08 00:50:31.149669 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-08 00:50:31.149676 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-08 00:50:31.149684 | orchestrator |
2026-04-08 00:50:31.149690 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-04-08 00:50:31.149697 | orchestrator |
2026-04-08 00:50:31.149703 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-08 00:50:31.149711 | orchestrator | Wednesday 08 April 2026 00:49:45 +0000 (0:00:00.282) 0:00:00.891 *******
2026-04-08 00:50:31.149718 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:50:31.149727 | orchestrator |
2026-04-08 00:50:31.149734 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-04-08 00:50:31.149741 | orchestrator | Wednesday 08 April 2026 00:49:46 +0000 (0:00:00.656) 0:00:01.547 *******
2026-04-08 00:50:31.149754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-08 00:50:31.149765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-08 00:50:31.149807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'},
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-08 00:50:31.149816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:50:31.149825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:50:31.149833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:50:31.149840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:50:31.149854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:50:31.149866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:50:31.149874 | orchestrator | 2026-04-08 00:50:31.149885 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-04-08 00:50:31.149893 | orchestrator | Wednesday 08 April 2026 00:49:48 +0000 (0:00:02.197) 0:00:03.747 ******* 2026-04-08 00:50:31.149900 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:31.149908 | orchestrator | 2026-04-08 00:50:31.149915 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-08 00:50:31.149922 | orchestrator | Wednesday 08 April 2026 00:49:48 +0000 (0:00:00.126) 0:00:03.874 ******* 2026-04-08 00:50:31.149928 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:31.149935 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:50:31.149942 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:31.149950 | orchestrator | 2026-04-08 00:50:31.149956 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-08 00:50:31.149964 | orchestrator | Wednesday 08 April 2026 00:49:49 +0000 (0:00:00.242) 0:00:04.116 ******* 2026-04-08 00:50:31.149971 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 00:50:31.149978 | orchestrator | 2026-04-08 00:50:31.149985 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-08 00:50:31.149992 | orchestrator | Wednesday 08 April 2026 00:49:49 +0000 (0:00:00.827) 0:00:04.944 ******* 2026-04-08 00:50:31.150000 | 
orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:50:31.150007 | orchestrator | 2026-04-08 00:50:31.150084 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-08 00:50:31.150501 | orchestrator | Wednesday 08 April 2026 00:49:50 +0000 (0:00:00.621) 0:00:05.566 ******* 2026-04-08 00:50:31.150525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-08 00:50:31.150545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-08 00:50:31.150571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-08 00:50:31.150580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:50:31.150588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:50:31.150600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:50:31.150608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:50:31.150616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:50:31.150624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:50:31.150637 | orchestrator | 2026-04-08 00:50:31.150645 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-08 00:50:31.150653 | 
orchestrator | Wednesday 08 April 2026 00:49:53 +0000 (0:00:02.718) 0:00:08.285 ******* 2026-04-08 00:50:31.150696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-08 00:50:31.150705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:50:31.150719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:50:31.150727 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:31.150735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-08 00:50:31.150753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:50:31.150762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:50:31.150770 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:50:31.150777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-08 00:50:31.150800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:50:31.150808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:50:31.150815 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:31.150822 | orchestrator | 2026-04-08 00:50:31.150829 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-08 00:50:31.150837 | orchestrator | Wednesday 08 April 2026 00:49:53 +0000 (0:00:00.594) 0:00:08.879 
******* 2026-04-08 00:50:31.150856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-08 00:50:31.150864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:50:31.150877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:50:31.150885 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:31.150893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-08 00:50:31.150902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:50:31.150910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:50:31.150917 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:50:31.150934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-08 00:50:31.150948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:50:31.150956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:50:31.150963 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:31.150971 | orchestrator | 2026-04-08 00:50:31.150978 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-08 00:50:31.150985 | orchestrator | Wednesday 08 April 2026 00:49:54 +0000 (0:00:00.957) 0:00:09.836 ******* 2026-04-08 00:50:31.150993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-08 00:50:31.151010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-08 00:50:31.151025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-08 00:50:31.151034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:50:31.151059 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:50:31.151067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:50:31.151080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:50:31.151092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:50:31.151105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:50:31.151113 | orchestrator | 2026-04-08 00:50:31.151121 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-08 00:50:31.151128 | orchestrator | Wednesday 08 April 2026 00:49:57 +0000 (0:00:02.869) 0:00:12.706 ******* 2026-04-08 00:50:31.151136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-08 00:50:31.151144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-08 00:50:31.151161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:50:31.151174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:50:31.151182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-08 00:50:31.151190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:50:31.151199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:50:31.151207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-08 00:50:31.151224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-08 00:50:31.151237 | orchestrator |
2026-04-08 00:50:31.151244 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-04-08 00:50:31.151252 | orchestrator | Wednesday 08 April 2026 00:50:03 +0000 (0:00:05.664) 0:00:18.371 *******
2026-04-08 00:50:31.151259 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:50:31.151266 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:50:31.151274 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:50:31.151281 | orchestrator |
2026-04-08 00:50:31.151289 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-04-08 00:50:31.151296 | orchestrator | Wednesday 08 April 2026 00:50:04 +0000 (0:00:01.612) 0:00:19.984 *******
2026-04-08 00:50:31.151304 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:50:31.151312 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:50:31.151319 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:50:31.151326 | orchestrator |
2026-04-08 00:50:31.151334 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-04-08 00:50:31.151341 | orchestrator | Wednesday 08 April 2026 00:50:05 +0000 (0:00:00.733) 0:00:20.718 *******
2026-04-08 00:50:31.151348 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:50:31.151355 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:50:31.151362 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:50:31.151370 | orchestrator |
2026-04-08 00:50:31.151377 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-04-08 00:50:31.151384 | orchestrator | Wednesday 08 April 2026 00:50:05 +0000 (0:00:00.370) 0:00:21.088 *******
2026-04-08 00:50:31.151391 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:50:31.151398 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:50:31.151406 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:50:31.151414 | orchestrator |
2026-04-08 00:50:31.151421 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-04-08 00:50:31.151429 | orchestrator | Wednesday 08 April 2026 00:50:06 +0000 (0:00:00.275) 0:00:21.365 *******
2026-04-08 00:50:31.151436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000',
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-08 00:50:31.151445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:50:31.151458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:50:31.151477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-08 00:50:31.151484 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:50:31.151491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:50:31.151498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:50:31.151505 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:31.151513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-08 00:50:31.151528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:50:31.151541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:50:31.151549 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:31.151555 | orchestrator | 2026-04-08 00:50:31.151561 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-08 00:50:31.151567 | orchestrator | Wednesday 08 April 2026 00:50:07 +0000 (0:00:00.822) 0:00:22.187 ******* 2026-04-08 00:50:31.151573 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:31.151579 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:50:31.151586 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:31.151591 | orchestrator | 2026-04-08 00:50:31.151599 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-08 00:50:31.151605 | orchestrator | Wednesday 08 April 2026 00:50:07 +0000 (0:00:00.283) 0:00:22.470 ******* 2026-04-08 00:50:31.151612 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-08 00:50:31.151620 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-08 00:50:31.151627 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-08 00:50:31.151634 | orchestrator |
2026-04-08 00:50:31.151641 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-04-08 00:50:31.151647 | orchestrator | Wednesday 08 April 2026 00:50:09 +0000 (0:00:02.031) 0:00:24.502 *******
2026-04-08 00:50:31.151653 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-08 00:50:31.151660 | orchestrator |
2026-04-08 00:50:31.151666 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-04-08 00:50:31.151672 | orchestrator | Wednesday 08 April 2026 00:50:10 +0000 (0:00:01.060) 0:00:25.562 *******
2026-04-08 00:50:31.151678 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:50:31.151684 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:50:31.151691 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:50:31.151697 | orchestrator |
2026-04-08 00:50:31.151704 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-04-08 00:50:31.151710 | orchestrator | Wednesday 08 April 2026 00:50:10 +0000 (0:00:00.523) 0:00:26.086 *******
2026-04-08 00:50:31.151717 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-08 00:50:31.151723 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-08 00:50:31.151728 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-08 00:50:31.151741 | orchestrator |
2026-04-08 00:50:31.151748 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-04-08 00:50:31.151755 | orchestrator | Wednesday 08 April 2026 00:50:12 +0000 (0:00:01.309) 0:00:27.396 *******
2026-04-08 00:50:31.151761 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:50:31.151768 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:50:31.151775 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:50:31.151781 | orchestrator |
2026-04-08 00:50:31.151788 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-04-08 00:50:31.151795 | orchestrator | Wednesday 08 April 2026 00:50:12 +0000 (0:00:00.281) 0:00:27.678 *******
2026-04-08 00:50:31.151802 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-08 00:50:31.151810 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-08 00:50:31.151816 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-08 00:50:31.151823 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-08 00:50:31.151830 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-08 00:50:31.151836 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-08 00:50:31.151843 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-08 00:50:31.151849 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-08 00:50:31.151855 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-08 00:50:31.151862 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-08 00:50:31.151869 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-08 00:50:31.151875 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-08 00:50:31.151882 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
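Editor's aside (not part of the job output): every loop item logged in these keystone tasks carries a kolla-style `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`). A minimal sketch of how such a dict could be mapped onto `docker run` health flags, assuming the bare numeric strings mean seconds; the helper name `healthcheck_to_docker_args` is invented here and is not kolla-ansible's own API:

```python
# Container definition copied from the loop items in this log
# (keystone-ssh on testbed-node-0).
keystone_ssh = {
    "container_name": "keystone_ssh",
    "image": "registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328",
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_listen sshd 8023"],
        "timeout": "30",
    },
}


def healthcheck_to_docker_args(hc: dict) -> list:
    """Translate a kolla-style healthcheck dict into `docker run` flags.

    Hypothetical helper: the seconds suffix is an assumption about how
    the bare numbers in the log are interpreted.
    """
    test = hc["test"]
    # "CMD-SHELL" means the following element is run through a shell.
    cmd = test[1] if test and test[0] == "CMD-SHELL" else " ".join(test)
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]


print(healthcheck_to_docker_args(keystone_ssh["healthcheck"]))
```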
2026-04-08 00:50:31.151888 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-08 00:50:31.151899 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-08 00:50:31.151906 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-08 00:50:31.151917 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-08 00:50:31.151925 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-08 00:50:31.151931 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-08 00:50:31.151938 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-08 00:50:31.151945 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-08 00:50:31.151951 | orchestrator | 2026-04-08 00:50:31.151958 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-08 00:50:31.151965 | orchestrator | Wednesday 08 April 2026 00:50:21 +0000 (0:00:08.729) 0:00:36.407 ******* 2026-04-08 00:50:31.151972 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-08 00:50:31.151979 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-08 00:50:31.151985 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-08 00:50:31.151998 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-08 00:50:31.152005 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-08 00:50:31.152012 | orchestrator | 
changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-08 00:50:31.152018 | orchestrator | 2026-04-08 00:50:31.152024 | orchestrator | TASK [service-check-containers : keystone | Check containers] ****************** 2026-04-08 00:50:31.152030 | orchestrator | Wednesday 08 April 2026 00:50:23 +0000 (0:00:02.550) 0:00:38.957 ******* 2026-04-08 00:50:31.152053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-08 00:50:31.152062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-08 00:50:31.152080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-08 00:50:31.152088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:50:31.152099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:50:31.152107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-08 00:50:31.152114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:50:31.152121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:50:31.152136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-08 00:50:31.152143 | orchestrator | 2026-04-08 00:50:31.152149 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-04-08 00:50:31.152155 | 
orchestrator | Wednesday 08 April 2026 00:50:26 +0000 (0:00:02.263) 0:00:41.221 ******* 2026-04-08 00:50:31.152162 | orchestrator | changed: [testbed-node-0] => { 2026-04-08 00:50:31.152174 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:50:31.152181 | orchestrator | } 2026-04-08 00:50:31.152188 | orchestrator | changed: [testbed-node-1] => { 2026-04-08 00:50:31.152195 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:50:31.152202 | orchestrator | } 2026-04-08 00:50:31.152208 | orchestrator | changed: [testbed-node-2] => { 2026-04-08 00:50:31.152214 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:50:31.152221 | orchestrator | } 2026-04-08 00:50:31.152228 | orchestrator | 2026-04-08 00:50:31.152234 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-08 00:50:31.152241 | orchestrator | Wednesday 08 April 2026 00:50:26 +0000 (0:00:00.304) 0:00:41.525 ******* 2026-04-08 00:50:31.152248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-08 00:50:31.152255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:50:31.152262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:50:31.152270 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:31.152285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-08 00:50:31.152297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:50:31.152303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:50:31.152310 | orchestrator | skipping: 
[testbed-node-1] 2026-04-08 00:50:31.152317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-08 00:50:31.152323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-08 00:50:31.152329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-08 00:50:31.152341 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:31.152348 | orchestrator | 2026-04-08 00:50:31.152355 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-08 00:50:31.152362 | orchestrator | Wednesday 08 April 2026 00:50:27 +0000 (0:00:00.977) 0:00:42.502 ******* 2026-04-08 00:50:31.152373 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:50:31.152380 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:50:31.152389 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:50:31.152396 | orchestrator | 2026-04-08 00:50:31.152402 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-04-08 00:50:31.152408 | orchestrator | Wednesday 08 April 2026 00:50:27 +0000 (0:00:00.297) 0:00:42.800 ******* 2026-04-08 00:50:31.152415 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-08 00:50:31.152422 | orchestrator |
2026-04-08 00:50:31.152428 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:50:31.152436 | orchestrator | testbed-node-0 : ok=18  changed=10  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0
2026-04-08 00:50:31.152446 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-08 00:50:31.152453 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-08 00:50:31.152460 | orchestrator |
2026-04-08 00:50:31.152467 | orchestrator |
2026-04-08 00:50:31.152474 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:50:31.152488 | orchestrator | Wednesday 08 April 2026 00:50:28 +0000 (0:00:00.719) 0:00:43.519 *******
2026-04-08 00:50:31.152495 | orchestrator | ===============================================================================
2026-04-08 00:50:31.152501 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.73s
2026-04-08 00:50:31.152508 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.66s
2026-04-08 00:50:31.152514 | orchestrator | keystone : Copying over config.json files for services ------------------ 2.87s
2026-04-08 00:50:31.152521 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 2.72s
2026-04-08 00:50:31.152528 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.55s
2026-04-08 00:50:31.152534 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.26s
2026-04-08 00:50:31.152542 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.20s
2026-04-08 00:50:31.152549 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.03s
2026-04-08 00:50:31.152556 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.61s
2026-04-08 00:50:31.152563 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 1.31s
2026-04-08 00:50:31.152570 | orchestrator | keystone : Checking whether keystone-paste.ini file exists -------------- 1.06s
2026-04-08 00:50:31.152577 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.98s
2026-04-08 00:50:31.152584 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS key ---- 0.96s
2026-04-08 00:50:31.152591 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 0.83s
2026-04-08 00:50:31.152598 | orchestrator | keystone : Copying over existing policy file ---------------------------- 0.82s
2026-04-08 00:50:31.152605 | orchestrator | keystone : Create Keystone domain-specific config directory ------------- 0.73s
2026-04-08 00:50:31.152612 | orchestrator | keystone : Creating keystone database ----------------------------------- 0.72s
2026-04-08 00:50:31.152618 | orchestrator | keystone : include_tasks ------------------------------------------------ 0.66s
2026-04-08 00:50:31.152625 | orchestrator | keystone : include_tasks ------------------------------------------------ 0.62s
2026-04-08 00:50:31.152638 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS certificate --- 0.59s
2026-04-08 00:50:31.152645 | orchestrator | 2026-04-08 00:50:31 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:50:34.177717 | orchestrator | 2026-04-08 00:50:34 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state STARTED
2026-04-08 00:50:34.177801 | orchestrator | 2026-04-08 00:50:34 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:50:34.178167 | orchestrator | 2026-04-08 00:50:34 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state STARTED 2026-04-08 00:50:34.178901 | orchestrator | 2026-04-08 00:50:34 | INFO  | Task 63708608-5c69-49bd-9e34-57042e1b572f is in state STARTED 2026-04-08 00:50:34.180742 | orchestrator | 2026-04-08 00:50:34 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED 2026-04-08 00:50:34.180812 | orchestrator | 2026-04-08 00:50:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:50:37.206821 | orchestrator | 2026-04-08 00:50:37 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state STARTED 2026-04-08 00:50:37.207806 | orchestrator | 2026-04-08 00:50:37 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:50:37.209265 | orchestrator | 2026-04-08 00:50:37 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state STARTED 2026-04-08 00:50:37.210802 | orchestrator | 2026-04-08 00:50:37 | INFO  | Task 63708608-5c69-49bd-9e34-57042e1b572f is in state STARTED 2026-04-08 00:50:37.212363 | orchestrator | 2026-04-08 00:50:37 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED 2026-04-08 00:50:37.212434 | orchestrator | 2026-04-08 00:50:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:50:40.251629 | orchestrator | 2026-04-08 00:50:40 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state STARTED 2026-04-08 00:50:40.252977 | orchestrator | 2026-04-08 00:50:40 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:50:40.253674 | orchestrator | 2026-04-08 00:50:40 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state STARTED 2026-04-08 00:50:40.255208 | orchestrator | 2026-04-08 00:50:40 | INFO  | Task 63708608-5c69-49bd-9e34-57042e1b572f is in state STARTED 2026-04-08 00:50:40.256339 | orchestrator | 2026-04-08 00:50:40 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED 
2026-04-08 00:50:40.256388 | orchestrator | 2026-04-08 00:50:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:50:43.301514 | orchestrator | 2026-04-08 00:50:43 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state STARTED 2026-04-08 00:50:43.304296 | orchestrator | 2026-04-08 00:50:43 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:50:43.306750 | orchestrator | 2026-04-08 00:50:43 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state STARTED 2026-04-08 00:50:43.309411 | orchestrator | 2026-04-08 00:50:43 | INFO  | Task 63708608-5c69-49bd-9e34-57042e1b572f is in state STARTED 2026-04-08 00:50:43.310982 | orchestrator | 2026-04-08 00:50:43 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED 2026-04-08 00:50:43.311111 | orchestrator | 2026-04-08 00:50:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:50:46.368436 | orchestrator | 2026-04-08 00:50:46 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state STARTED 2026-04-08 00:50:46.371985 | orchestrator | 2026-04-08 00:50:46 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:50:46.375556 | orchestrator | 2026-04-08 00:50:46 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state STARTED 2026-04-08 00:50:46.378169 | orchestrator | 2026-04-08 00:50:46 | INFO  | Task 63708608-5c69-49bd-9e34-57042e1b572f is in state STARTED 2026-04-08 00:50:46.380739 | orchestrator | 2026-04-08 00:50:46 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED 2026-04-08 00:50:46.380789 | orchestrator | 2026-04-08 00:50:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:50:49.427596 | orchestrator | 2026-04-08 00:50:49 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state STARTED 2026-04-08 00:50:49.427698 | orchestrator | 2026-04-08 00:50:49 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:50:49.427714 | 
orchestrator | 2026-04-08 00:50:49 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state STARTED 2026-04-08 00:50:49.428526 | orchestrator | 2026-04-08 00:50:49 | INFO  | Task 63708608-5c69-49bd-9e34-57042e1b572f is in state STARTED 2026-04-08 00:50:49.429674 | orchestrator | 2026-04-08 00:50:49 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED 2026-04-08 00:50:49.429705 | orchestrator | 2026-04-08 00:50:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:50:52.477319 | orchestrator | 2026-04-08 00:50:52 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state STARTED 2026-04-08 00:50:52.480763 | orchestrator | 2026-04-08 00:50:52 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:50:52.486692 | orchestrator | 2026-04-08 00:50:52 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state STARTED 2026-04-08 00:50:52.488480 | orchestrator | 2026-04-08 00:50:52 | INFO  | Task 63708608-5c69-49bd-9e34-57042e1b572f is in state STARTED 2026-04-08 00:50:52.491266 | orchestrator | 2026-04-08 00:50:52 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED 2026-04-08 00:50:52.491350 | orchestrator | 2026-04-08 00:50:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:50:55.544579 | orchestrator | 2026-04-08 00:50:55 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state STARTED 2026-04-08 00:50:55.549158 | orchestrator | 2026-04-08 00:50:55 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:50:55.550795 | orchestrator | 2026-04-08 00:50:55 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state STARTED 2026-04-08 00:50:55.551976 | orchestrator | 2026-04-08 00:50:55 | INFO  | Task 63708608-5c69-49bd-9e34-57042e1b572f is in state STARTED 2026-04-08 00:50:55.553364 | orchestrator | 2026-04-08 00:50:55 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED 2026-04-08 00:50:55.553416 | 
orchestrator | 2026-04-08 00:50:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:50:58.617303 | orchestrator | 2026-04-08 00:50:58 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state STARTED 2026-04-08 00:50:58.618653 | orchestrator | 2026-04-08 00:50:58 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:50:58.619846 | orchestrator | 2026-04-08 00:50:58 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state STARTED 2026-04-08 00:50:58.621326 | orchestrator | 2026-04-08 00:50:58 | INFO  | Task 63708608-5c69-49bd-9e34-57042e1b572f is in state STARTED 2026-04-08 00:50:58.623331 | orchestrator | 2026-04-08 00:50:58 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED 2026-04-08 00:50:58.623389 | orchestrator | 2026-04-08 00:50:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:51:01.668250 | orchestrator | 2026-04-08 00:51:01 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state STARTED 2026-04-08 00:51:01.668989 | orchestrator | 2026-04-08 00:51:01 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:51:01.669683 | orchestrator | 2026-04-08 00:51:01 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state STARTED 2026-04-08 00:51:01.670374 | orchestrator | 2026-04-08 00:51:01 | INFO  | Task 63708608-5c69-49bd-9e34-57042e1b572f is in state STARTED 2026-04-08 00:51:01.671283 | orchestrator | 2026-04-08 00:51:01 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED 2026-04-08 00:51:01.671310 | orchestrator | 2026-04-08 00:51:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:51:04.710829 | orchestrator | 2026-04-08 00:51:04 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state STARTED 2026-04-08 00:51:04.712583 | orchestrator | 2026-04-08 00:51:04 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:51:04.713907 | orchestrator | 2026-04-08 
00:51:04 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state STARTED 2026-04-08 00:51:04.715304 | orchestrator | 2026-04-08 00:51:04 | INFO  | Task 63708608-5c69-49bd-9e34-57042e1b572f is in state STARTED 2026-04-08 00:51:04.716753 | orchestrator | 2026-04-08 00:51:04 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED 2026-04-08 00:51:04.716795 | orchestrator | 2026-04-08 00:51:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:51:07.758655 | orchestrator | 2026-04-08 00:51:07 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state STARTED 2026-04-08 00:51:07.760547 | orchestrator | 2026-04-08 00:51:07 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:51:07.762146 | orchestrator | 2026-04-08 00:51:07 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state STARTED 2026-04-08 00:51:07.763633 | orchestrator | 2026-04-08 00:51:07 | INFO  | Task 63708608-5c69-49bd-9e34-57042e1b572f is in state STARTED 2026-04-08 00:51:07.765183 | orchestrator | 2026-04-08 00:51:07 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED 2026-04-08 00:51:07.765591 | orchestrator | 2026-04-08 00:51:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:51:10.802339 | orchestrator | 2026-04-08 00:51:10 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state STARTED 2026-04-08 00:51:10.802429 | orchestrator | 2026-04-08 00:51:10 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:51:10.805558 | orchestrator | 2026-04-08 00:51:10 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state STARTED 2026-04-08 00:51:10.808639 | orchestrator | 2026-04-08 00:51:10 | INFO  | Task 63708608-5c69-49bd-9e34-57042e1b572f is in state STARTED 2026-04-08 00:51:10.811110 | orchestrator | 2026-04-08 00:51:10 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED 2026-04-08 00:51:10.811387 | orchestrator | 2026-04-08 
00:51:10 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:51:13.851331 | orchestrator | 2026-04-08 00:51:13 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state STARTED
2026-04-08 00:51:13.851830 | orchestrator | 2026-04-08 00:51:13 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:51:13.852234 | orchestrator | 2026-04-08 00:51:13 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state STARTED
2026-04-08 00:51:13.853399 | orchestrator | 2026-04-08 00:51:13 | INFO  | Task 63708608-5c69-49bd-9e34-57042e1b572f is in state STARTED
2026-04-08 00:51:13.854457 | orchestrator | 2026-04-08 00:51:13 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED
2026-04-08 00:51:13.854556 | orchestrator | 2026-04-08 00:51:13 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:51:16.908443 | orchestrator | 2026-04-08 00:51:16 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state STARTED
2026-04-08 00:51:16.912483 | orchestrator | 2026-04-08 00:51:16 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:51:16.916495 | orchestrator | 2026-04-08 00:51:16 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state STARTED
2026-04-08 00:51:16.918587 | orchestrator | 2026-04-08 00:51:16 | INFO  | Task 63708608-5c69-49bd-9e34-57042e1b572f is in state SUCCESS
2026-04-08 00:51:16.921154 | orchestrator | 2026-04-08 00:51:16 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED
2026-04-08 00:51:16.924792 | orchestrator | 2026-04-08 00:51:16 | INFO  | Task 2ed2f342-8676-418e-9115-33d3dfa7dd50 is in state STARTED
2026-04-08 00:51:16.924855 | orchestrator | 2026-04-08 00:51:16 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:51:19.962789 | orchestrator | 2026-04-08 00:51:19 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state STARTED
2026-04-08 00:51:19.965319 | orchestrator | 2026-04-08 00:51:19 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:51:19.967174 | orchestrator | 2026-04-08 00:51:19 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state STARTED
2026-04-08 00:51:19.968424 | orchestrator | 2026-04-08 00:51:19 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED
2026-04-08 00:51:19.970127 | orchestrator | 2026-04-08 00:51:19 | INFO  | Task 2ed2f342-8676-418e-9115-33d3dfa7dd50 is in state STARTED
2026-04-08 00:51:19.970348 | orchestrator | 2026-04-08 00:51:19 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:51:23.016358 | orchestrator | 2026-04-08 00:51:23 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state STARTED
2026-04-08 00:51:23.016643 | orchestrator | 2026-04-08 00:51:23 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:51:23.017649 | orchestrator | 2026-04-08 00:51:23 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state STARTED
2026-04-08 00:51:23.018904 | orchestrator | 2026-04-08 00:51:23 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED
2026-04-08 00:51:23.019590 | orchestrator | 2026-04-08 00:51:23 | INFO  | Task 2ed2f342-8676-418e-9115-33d3dfa7dd50 is in state STARTED
2026-04-08 00:51:23.019627 | orchestrator | 2026-04-08 00:51:23 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:51:26.066257 | orchestrator | 2026-04-08 00:51:26 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state STARTED
2026-04-08 00:51:26.068294 | orchestrator | 2026-04-08 00:51:26 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:51:26.069435 | orchestrator | 2026-04-08 00:51:26 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state STARTED
2026-04-08 00:51:26.070980 | orchestrator | 2026-04-08 00:51:26 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED
2026-04-08 00:51:26.073161 | orchestrator | 2026-04-08 00:51:26 | INFO  | Task 2ed2f342-8676-418e-9115-33d3dfa7dd50 is in state STARTED
2026-04-08 00:51:26.073233 | orchestrator | 2026-04-08 00:51:26 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:51:29.116007 | orchestrator | 2026-04-08 00:51:29 | INFO  | Task f8569579-e36f-4843-bde3-af2d8be33392 is in state SUCCESS
2026-04-08 00:51:29.116947 | orchestrator | 2026-04-08 00:51:29 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:51:29.118992 | orchestrator |
2026-04-08 00:51:29.119108 | orchestrator |
2026-04-08 00:51:29.119127 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-04-08 00:51:29.119141 | orchestrator |
2026-04-08 00:51:29.119155 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-04-08 00:51:29.119168 | orchestrator | Wednesday 08 April 2026 00:50:32 +0000 (0:00:00.239) 0:00:00.239 *******
2026-04-08 00:51:29.119181 | orchestrator | changed: [localhost]
2026-04-08 00:51:29.119194 | orchestrator |
2026-04-08 00:51:29.119207 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-04-08 00:51:29.119220 | orchestrator | Wednesday 08 April 2026 00:50:34 +0000 (0:00:01.553) 0:00:01.793 *******
2026-04-08 00:51:29.119232 | orchestrator | changed: [localhost]
2026-04-08 00:51:29.119245 | orchestrator |
2026-04-08 00:51:29.119269 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-04-08 00:51:29.119282 | orchestrator | Wednesday 08 April 2026 00:51:08 +0000 (0:00:34.392) 0:00:36.185 *******
2026-04-08 00:51:29.119294 | orchestrator | changed: [localhost]
2026-04-08 00:51:29.119306 | orchestrator |
2026-04-08 00:51:29.119332 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 00:51:29.119345 | orchestrator |
2026-04-08 00:51:29.119357 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 00:51:29.119368 | orchestrator | Wednesday 08 April 2026 00:51:13 +0000 (0:00:04.783) 0:00:40.968 *******
2026-04-08 00:51:29.119380 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:51:29.119394 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:51:29.119406 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:51:29.119418 | orchestrator |
2026-04-08 00:51:29.119430 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 00:51:29.119443 | orchestrator | Wednesday 08 April 2026 00:51:13 +0000 (0:00:00.248) 0:00:41.217 *******
2026-04-08 00:51:29.119455 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-04-08 00:51:29.119467 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-04-08 00:51:29.119479 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-04-08 00:51:29.119491 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-04-08 00:51:29.119504 | orchestrator |
2026-04-08 00:51:29.119516 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-04-08 00:51:29.119528 | orchestrator | skipping: no hosts matched
2026-04-08 00:51:29.119541 | orchestrator |
2026-04-08 00:51:29.119553 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:51:29.119566 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:51:29.119581 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:51:29.119703 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:51:29.119721 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:51:29.119734 | orchestrator |
2026-04-08 00:51:29.119746 | orchestrator |
2026-04-08 00:51:29.119759 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:51:29.119771 | orchestrator | Wednesday 08 April 2026 00:51:14 +0000 (0:00:00.373) 0:00:41.590 *******
2026-04-08 00:51:29.119784 | orchestrator | ===============================================================================
2026-04-08 00:51:29.119796 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 34.39s
2026-04-08 00:51:29.119809 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.78s
2026-04-08 00:51:29.119821 | orchestrator | Ensure the destination directory exists --------------------------------- 1.55s
2026-04-08 00:51:29.119834 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.37s
2026-04-08 00:51:29.119860 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s
2026-04-08 00:51:29.119872 | orchestrator |
2026-04-08 00:51:29.119884 | orchestrator |
2026-04-08 00:51:29.119896 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 00:51:29.119909 | orchestrator |
2026-04-08 00:51:29.119921 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 00:51:29.119934 | orchestrator | Wednesday 08 April 2026 00:50:33 +0000 (0:00:00.341) 0:00:00.341 *******
2026-04-08 00:51:29.119946 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:51:29.119959 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:51:29.119972 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:51:29.119984 | orchestrator |
2026-04-08 00:51:29.119997 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 00:51:29.120009 | orchestrator | Wednesday 08 April 2026 00:50:33 +0000 (0:00:00.342) 0:00:00.684 *******
2026-04-08 00:51:29.120021 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-04-08 00:51:29.120034 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-04-08 00:51:29.120046 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-04-08 00:51:29.120058 | orchestrator |
2026-04-08 00:51:29.120096 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-04-08 00:51:29.120108 | orchestrator |
2026-04-08 00:51:29.120121 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-08 00:51:29.120133 | orchestrator | Wednesday 08 April 2026 00:50:33 +0000 (0:00:00.346) 0:00:01.030 *******
2026-04-08 00:51:29.120147 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:51:29.120160 | orchestrator |
2026-04-08 00:51:29.120173 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************
2026-04-08 00:51:29.120186 | orchestrator | Wednesday 08 April 2026 00:50:34 +0000 (0:00:00.832) 0:00:01.862 *******
2026-04-08 00:51:29.120216 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (5 retries left).
2026-04-08 00:51:29.120230 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (4 retries left).
2026-04-08 00:51:29.120243 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (3 retries left).
2026-04-08 00:51:29.120255 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (2 retries left).
2026-04-08 00:51:29.120268 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (1 retries left).
2026-04-08 00:51:29.120292 | orchestrator | failed: [testbed-node-0] (item=designate (dns)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Designate DNS Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9001"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9001"}], "name": "designate", "type": "dns"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-08 00:51:29.120308 | orchestrator |
2026-04-08 00:51:29.120321 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:51:29.120334 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-08 00:51:29.120347 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:51:29.120359 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:51:29.120372 | orchestrator |
2026-04-08 00:51:29.120385 | orchestrator |
2026-04-08 00:51:29.120399 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:51:29.120412 | orchestrator | Wednesday 08 April 2026 00:51:27 +0000 (0:00:53.161) 0:00:55.024 *******
2026-04-08 00:51:29.120433 | orchestrator | ===============================================================================
2026-04-08 00:51:29.120445 | orchestrator | service-ks-register : designate | Creating/deleting services ----------- 53.16s
2026-04-08 00:51:29.120457 | orchestrator | designate : include_tasks ----------------------------------------------- 0.83s
2026-04-08 00:51:29.120470 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s
2026-04-08 00:51:29.120483 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-04-08 00:51:29.120495 | orchestrator |
2026-04-08 00:51:29.120507 | orchestrator |
2026-04-08 00:51:29.120519 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 00:51:29.120531 | orchestrator |
2026-04-08 00:51:29.120544 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 00:51:29.120557 | orchestrator | Wednesday 08 April 2026 00:50:33 +0000 (0:00:00.269) 0:00:00.269 *******
2026-04-08 00:51:29.120569 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:51:29.120582 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:51:29.120595 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:51:29.120608 | orchestrator |
2026-04-08 00:51:29.120622 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 00:51:29.120635 | orchestrator | Wednesday 08 April 2026 00:50:33 +0000 (0:00:00.283) 0:00:00.552 *******
2026-04-08 00:51:29.120648 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-04-08 00:51:29.120660 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-04-08 00:51:29.120672 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-04-08 00:51:29.120685 | orchestrator |
2026-04-08 00:51:29.120697 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-04-08 00:51:29.120711 | orchestrator |
2026-04-08 00:51:29.120724 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-08 00:51:29.120736 | orchestrator | Wednesday 08 April 2026 00:50:34 +0000 (0:00:00.265) 0:00:00.818 *******
2026-04-08 00:51:29.120749 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:51:29.120762 | orchestrator |
2026-04-08 00:51:29.120774 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] *************
2026-04-08 00:51:29.120787 | orchestrator | Wednesday 08 April 2026 00:50:34 +0000 (0:00:00.737) 0:00:01.555 *******
2026-04-08 00:51:29.120798 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (5 retries left).
2026-04-08 00:51:29.120811 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (4 retries left).
2026-04-08 00:51:29.120824 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (3 retries left).
2026-04-08 00:51:29.120837 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (2 retries left).
2026-04-08 00:51:29.120851 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (1 retries left).
2026-04-08 00:51:29.120872 | orchestrator | failed: [testbed-node-0] (item=barbican (key-manager)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Barbican Key Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9311"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9311"}], "name": "barbican", "type": "key-manager"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-08 00:51:29.120889 | orchestrator |
2026-04-08 00:51:29.120901 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:51:29.120914 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-08 00:51:29.120926 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:51:29.120948 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:51:29.120960 | orchestrator |
2026-04-08 00:51:29.120973 | orchestrator |
2026-04-08 00:51:29.120992 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:51:29.121005 | orchestrator | Wednesday 08 April 2026 00:51:28 +0000 (0:00:53.135) 0:00:54.691 *******
2026-04-08 00:51:29.121017 | orchestrator | ===============================================================================
2026-04-08 00:51:29.121029 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------ 53.14s
2026-04-08 00:51:29.121041 | orchestrator | barbican : include_tasks ------------------------------------------------ 0.74s
2026-04-08 00:51:29.121053 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s
2026-04-08 00:51:29.121124 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.27s
2026-04-08 00:51:29.121140 | orchestrator | 2026-04-08 00:51:29 | INFO  | Task 7bb5fdc1-afbf-42f4-a79e-a215d0b54a58 is in state SUCCESS
2026-04-08 00:51:29.121153 | orchestrator | 2026-04-08 00:51:29 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED
2026-04-08 00:51:29.121471 | orchestrator | 2026-04-08 00:51:29 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is in state STARTED
2026-04-08 00:51:29.123589 | orchestrator | 2026-04-08 00:51:29 | INFO  | Task 2ed2f342-8676-418e-9115-33d3dfa7dd50 is in state STARTED
2026-04-08 00:51:29.123636 | orchestrator | 2026-04-08 00:51:29 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:51:32.183614 | orchestrator | 2026-04-08 00:51:32 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:51:32.186694 | orchestrator | 2026-04-08 00:51:32 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state STARTED
2026-04-08 00:51:32.189120 | orchestrator | 2026-04-08 00:51:32 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is in state STARTED
2026-04-08 00:51:32.190679 | orchestrator | 2026-04-08 00:51:32 | INFO  | Task 2ed2f342-8676-418e-9115-33d3dfa7dd50 is in state STARTED
2026-04-08 00:51:32.190928 | orchestrator | 2026-04-08 00:51:32 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:51:35.245149 | orchestrator | 2026-04-08 00:51:35 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:51:35.245262 | orchestrator | 2026-04-08 00:51:35 | INFO  | Task 5042d330-8de0-4b09-a2d5-d5ce314a25e1 is in state SUCCESS
2026-04-08 00:51:35.245815 | orchestrator | 2026-04-08 00:51:35 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is in state STARTED
2026-04-08 00:51:35.247325 | orchestrator | 2026-04-08 00:51:35 | INFO  | Task 2ed2f342-8676-418e-9115-33d3dfa7dd50 is in state STARTED
2026-04-08 00:51:35.248821 | orchestrator | 2026-04-08 00:51:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 00:51:35.248921 | orchestrator | 2026-04-08 00:51:35 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:51:38.289521 | orchestrator | 2026-04-08 00:51:38 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:51:38.291622 | orchestrator | 2026-04-08 00:51:38 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is in state STARTED
2026-04-08 00:51:38.293674 | orchestrator | 2026-04-08 00:51:38 | INFO  | Task 2ed2f342-8676-418e-9115-33d3dfa7dd50 is in state STARTED
2026-04-08 00:51:38.295581 | orchestrator | 2026-04-08 00:51:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 00:51:38.295613 | orchestrator | 2026-04-08 00:51:38 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:51:41.349882 | orchestrator | 2026-04-08 00:51:41 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:51:41.353371 | orchestrator | 2026-04-08 00:51:41 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is in state STARTED
2026-04-08 00:51:41.355011 | orchestrator | 2026-04-08 00:51:41 | INFO  | Task 2ed2f342-8676-418e-9115-33d3dfa7dd50 is in state STARTED
2026-04-08 00:51:41.356783 | orchestrator | 2026-04-08 00:51:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 00:51:41.356821 | orchestrator | 2026-04-08 00:51:41 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:51:44.401277 | orchestrator | 2026-04-08 00:51:44 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:51:44.402283 | orchestrator | 2026-04-08 00:51:44 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is in state STARTED
2026-04-08 00:51:44.403789 | orchestrator | 2026-04-08 00:51:44 | INFO  | Task 2ed2f342-8676-418e-9115-33d3dfa7dd50 is in state STARTED
2026-04-08 00:51:44.405490 | orchestrator | 2026-04-08 00:51:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 00:51:44.405554 | orchestrator | 2026-04-08 00:51:44 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:51:47.454966 | orchestrator | 2026-04-08 00:51:47 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:51:47.457062 | orchestrator | 2026-04-08 00:51:47 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is in state STARTED
2026-04-08 00:51:47.459195 | orchestrator | 2026-04-08 00:51:47 | INFO  | Task 2ed2f342-8676-418e-9115-33d3dfa7dd50 is in state STARTED
2026-04-08 00:51:47.461787 | orchestrator | 2026-04-08 00:51:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 00:51:47.461845 | orchestrator | 2026-04-08 00:51:47 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:51:50.503488 | orchestrator | 2026-04-08 00:51:50 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:51:50.505696 | orchestrator | 2026-04-08 00:51:50 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is in state STARTED
2026-04-08 00:51:50.507356 | orchestrator | 2026-04-08 00:51:50 | INFO  | Task 2ed2f342-8676-418e-9115-33d3dfa7dd50 is in state STARTED
2026-04-08 00:51:50.509424 | orchestrator | 2026-04-08 00:51:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 00:51:50.509484 | orchestrator | 2026-04-08 00:51:50 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:51:53.551530 | orchestrator | 2026-04-08 00:51:53 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:51:53.553994 | orchestrator | 2026-04-08 00:51:53 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is in state STARTED
2026-04-08 00:51:53.555829 | orchestrator | 2026-04-08 00:51:53 | INFO  | Task 2ed2f342-8676-418e-9115-33d3dfa7dd50 is in state STARTED
2026-04-08 00:51:53.557383 | orchestrator | 2026-04-08 00:51:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 00:51:53.557435 | orchestrator | 2026-04-08 00:51:53 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:51:56.596234 | orchestrator | 2026-04-08 00:51:56 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:51:56.597448 | orchestrator | 2026-04-08 00:51:56 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is in state STARTED
2026-04-08 00:51:56.598719 | orchestrator | 2026-04-08 00:51:56 | INFO  | Task 2ed2f342-8676-418e-9115-33d3dfa7dd50 is in state STARTED
2026-04-08 00:51:56.600164 | orchestrator | 2026-04-08 00:51:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 00:51:56.600240 | orchestrator | 2026-04-08 00:51:56 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:51:59.644192 | orchestrator | 2026-04-08 00:51:59 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:51:59.645262 | orchestrator | 2026-04-08 00:51:59 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is in state STARTED
2026-04-08 00:51:59.646890 | orchestrator | 2026-04-08 00:51:59 | INFO  | Task 2ed2f342-8676-418e-9115-33d3dfa7dd50 is in state STARTED
2026-04-08 00:51:59.649692 | orchestrator | 2026-04-08 00:51:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 00:51:59.649752 | orchestrator | 2026-04-08 00:51:59 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:52:02.697062 | orchestrator | 2026-04-08 00:52:02 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:52:02.698638 | orchestrator | 2026-04-08 00:52:02 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is in state STARTED
2026-04-08 00:52:02.700195 | orchestrator | 2026-04-08 00:52:02 | INFO  | Task 2ed2f342-8676-418e-9115-33d3dfa7dd50 is in state STARTED
2026-04-08 00:52:02.701924 | orchestrator | 2026-04-08 00:52:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 00:52:02.701971 | orchestrator | 2026-04-08 00:52:02 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:52:05.745948 | orchestrator | 2026-04-08 00:52:05 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:52:05.747541 | orchestrator | 2026-04-08 00:52:05 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is in state STARTED
2026-04-08 00:52:05.749583 | orchestrator | 2026-04-08 00:52:05 | INFO  | Task 2ed2f342-8676-418e-9115-33d3dfa7dd50 is in state STARTED
2026-04-08 00:52:05.750851 | orchestrator | 2026-04-08 00:52:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 00:52:05.751127 | orchestrator | 2026-04-08 00:52:05 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:52:08.789831 | orchestrator | 2026-04-08 00:52:08 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:52:08.790977 | orchestrator | 2026-04-08 00:52:08 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is in state STARTED
2026-04-08 00:52:08.791914 | orchestrator | 2026-04-08 00:52:08 | INFO  | Task 2ed2f342-8676-418e-9115-33d3dfa7dd50 is in state STARTED
2026-04-08 00:52:08.794285 | orchestrator | 2026-04-08 00:52:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 00:52:08.794338 | orchestrator | 2026-04-08 00:52:08 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:52:11.843630 | orchestrator | 2026-04-08 00:52:11 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED
2026-04-08 00:52:11.846447 | orchestrator | 2026-04-08 00:52:11 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is in state STARTED
2026-04-08 00:52:11.848452 | orchestrator | 2026-04-08 00:52:11 | INFO  | Task 2ed2f342-8676-418e-9115-33d3dfa7dd50 is in state SUCCESS
2026-04-08 00:52:11.849249 | orchestrator |
2026-04-08 00:52:11.849304 | orchestrator |
2026-04-08 00:52:11.849314 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 00:52:11.849321 | orchestrator |
2026-04-08 00:52:11.849328 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 00:52:11.849335 | orchestrator | Wednesday 08 April 2026 00:50:33 +0000 (0:00:00.341) 0:00:00.341 *******
2026-04-08 00:52:11.849341 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:11.849349 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:11.849355 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:11.849380 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:11.849387 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:11.849393 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:11.849399 | orchestrator |
2026-04-08 00:52:11.849406 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 00:52:11.849412 | orchestrator | Wednesday 08 April 2026 00:50:34 +0000 (0:00:00.955) 0:00:01.296 *******
2026-04-08 00:52:11.849418 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-04-08 00:52:11.849425 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-04-08 00:52:11.849431 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-04-08 00:52:11.849437 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-04-08 00:52:11.849444 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-04-08 00:52:11.849450 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-04-08 00:52:11.849456 | orchestrator |
2026-04-08 00:52:11.849462 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-04-08 00:52:11.849468 | orchestrator |
2026-04-08 00:52:11.849474 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-08 00:52:11.849480 | orchestrator | Wednesday 08 April 2026 00:50:35 +0000 (0:00:00.814) 0:00:02.111 *******
2026-04-08 00:52:11.849489 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:52:11.849496 | orchestrator |
2026-04-08 00:52:11.849503 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-04-08 00:52:11.849509 | orchestrator | Wednesday 08 April 2026 00:50:36 +0000 (0:00:01.029) 0:00:03.141 *******
2026-04-08 00:52:11.849515 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:11.849521 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:11.849527 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:11.849533 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:11.849540 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:11.849546 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:11.849552 | orchestrator |
2026-04-08 00:52:11.849558 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-04-08 00:52:11.849564 | orchestrator | Wednesday 08 April 2026 00:50:37 +0000 (0:00:01.342) 0:00:04.483 *******
2026-04-08 00:52:11.849570 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:11.849576 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:11.849582 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:11.849588 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:11.849594 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:11.849601 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:11.849607 | orchestrator |
2026-04-08 00:52:11.849613 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-04-08 00:52:11.849619 | orchestrator | Wednesday 08 April 2026 00:50:38 +0000 (0:00:00.975) 0:00:05.459 *******
2026-04-08 00:52:11.849625 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:11.849632 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:11.849638 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:11.849644 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:11.849650 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:11.849656 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:11.849663 | orchestrator |
2026-04-08 00:52:11.849669 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-04-08 00:52:11.849675 | orchestrator | Wednesday 08 April 2026 00:50:38 +0000 (0:00:00.522) 0:00:05.982 *******
2026-04-08 00:52:11.849681 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:11.849687 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:11.849693 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:11.849700 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:11.849706 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:11.849712 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:11.849723 | orchestrator |
2026-04-08 00:52:11.849729 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] **************
2026-04-08 00:52:11.849736 | orchestrator | Wednesday 08 April 2026 00:50:39 +0000 (0:00:00.704) 0:00:06.687 *******
2026-04-08 00:52:11.849742 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (5 retries left).
2026-04-08 00:52:11.849761 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (4 retries left).
2026-04-08 00:52:11.849768 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (3 retries left).
2026-04-08 00:52:11.849774 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (2 retries left).
2026-04-08 00:52:11.849781 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (1 retries left).
2026-04-08 00:52:11.849789 | orchestrator | failed: [testbed-node-0] (item=neutron (network)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Openstack Networking", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9696"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9696"}], "name": "neutron", "type": "network"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-08 00:52:11.849797 | orchestrator |
2026-04-08 00:52:11.849804 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:52:11.849822 | orchestrator | testbed-node-0 : ok=5  changed=0 unreachable=0 failed=1  skipped=2  rescued=0 ignored=0
2026-04-08 00:52:11.849830 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:52:11.849836 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:52:11.849843 | orchestrator | testbed-node-3 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:52:11.849849 | orchestrator | testbed-node-4 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:52:11.849856 | orchestrator | testbed-node-5 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:52:11.849862 | orchestrator |
2026-04-08 00:52:11.849868 | orchestrator |
2026-04-08 00:52:11.849875 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:52:11.849882 | orchestrator | Wednesday 08 April 2026 00:51:32 +0000 (0:00:52.899) 0:00:59.586 *******
2026-04-08 00:52:11.849893 | orchestrator | ===============================================================================
2026-04-08 00:52:11.849904 | orchestrator | service-ks-register : neutron | Creating/deleting services ------------- 52.90s
2026-04-08 00:52:11.849913 | orchestrator | neutron : Get container facts ------------------------------------------- 1.34s
2026-04-08 00:52:11.849923 | orchestrator | neutron : include_tasks ------------------------------------------------- 1.03s
2026-04-08 00:52:11.849933 | orchestrator | neutron : Get container volume facts ------------------------------------ 0.98s
2026-04-08 00:52:11.849942 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.96s
2026-04-08 00:52:11.849952 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s
2026-04-08 00:52:11.849962 | orchestrator | neutron : Check for ML2/OVS presence ------------------------------------ 0.70s
2026-04-08 00:52:11.849971 | orchestrator | neutron : Check for ML2/OVN presence ------------------------------------ 0.52s
2026-04-08 00:52:11.849981 | orchestrator |
2026-04-08 00:52:11.849990 | orchestrator |
2026-04-08 00:52:11.850001 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 00:52:11.850011 | orchestrator |
2026-04-08 00:52:11.850070 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 00:52:11.850076 | orchestrator | Wednesday 08 April 2026 00:51:17 +0000 (0:00:00.294) 0:00:00.294 *******
2026-04-08 00:52:11.850083 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:11.850089 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:11.850116 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:11.850123 | orchestrator |
2026-04-08 00:52:11.850129 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 00:52:11.850135 | orchestrator | Wednesday 08 April 2026 00:51:17 +0000 (0:00:00.263) 0:00:00.558 *******
2026-04-08 00:52:11.850141 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-04-08 00:52:11.850148 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-04-08 00:52:11.850154 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-04-08 00:52:11.850160 | orchestrator |
2026-04-08 00:52:11.850167 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-04-08 00:52:11.850173 | orchestrator |
2026-04-08 00:52:11.850179 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-08 00:52:11.850185 | orchestrator | Wednesday 08 April 2026 00:51:17 +0000 (0:00:00.278) 0:00:00.836 *******
2026-04-08 00:52:11.850192 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:52:11.850198 | orchestrator |
2026-04-08 00:52:11.850205 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************
2026-04-08 00:52:11.850215 | orchestrator | Wednesday 08 April 2026 00:51:18 +0000 (0:00:00.553) 0:00:01.390 *******
2026-04-08 00:52:11.850230 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (5 retries left).
2026-04-08 00:52:11.850243 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (4 retries left).
2026-04-08 00:52:11.850260 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (3 retries left).
2026-04-08 00:52:11.850271 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (2 retries left).
2026-04-08 00:52:11.850280 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (1 retries left).
2026-04-08 00:52:11.850291 | orchestrator | failed: [testbed-node-0] (item=placement (placement)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Placement Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:8780"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:8780"}], "name": "placement", "type": "placement"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-08 00:52:11.850301 | orchestrator |
2026-04-08 00:52:11.850310 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:52:11.850331 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-08 00:52:11.850341 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:52:11.850352 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:52:11.850362 | orchestrator |
2026-04-08 00:52:11.850372 | orchestrator |
2026-04-08 00:52:11.850382 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:52:11.850392 | orchestrator | Wednesday 08 April 2026 00:52:11 +0000 (0:00:53.064) 0:00:54.455 *******
2026-04-08 00:52:11.850398
| orchestrator | =============================================================================== 2026-04-08 00:52:11.850405 | orchestrator | service-ks-register : placement | Creating/deleting services ----------- 53.07s 2026-04-08 00:52:11.850411 | orchestrator | placement : include_tasks ----------------------------------------------- 0.55s 2026-04-08 00:52:11.850423 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.28s 2026-04-08 00:52:11.850430 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2026-04-08 00:52:11.851382 | orchestrator | 2026-04-08 00:52:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:52:11.853587 | orchestrator | 2026-04-08 00:52:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:52:14.900941 | orchestrator | 2026-04-08 00:52:14 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state STARTED 2026-04-08 00:52:14.902768 | orchestrator | 2026-04-08 00:52:14 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is in state STARTED 2026-04-08 00:52:14.904335 | orchestrator | 2026-04-08 00:52:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:52:14.904607 | orchestrator | 2026-04-08 00:52:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:52:17.960494 | orchestrator | 2026-04-08 00:52:17 | INFO  | Task aa1ed5b5-d83b-4fd0-bcce-40b93d5cbfbf is in state SUCCESS 2026-04-08 00:52:17.962426 | orchestrator | 2026-04-08 00:52:17.962482 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-08 00:52:17.962494 | orchestrator | 2.16.14 2026-04-08 00:52:17.962505 | orchestrator | 2026-04-08 00:52:17.962515 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-04-08 00:52:17.962526 | orchestrator | 2026-04-08 00:52:17.962536 | orchestrator | TASK [ceph-facts : Include 
facts.yml] ****************************************** 2026-04-08 00:52:17.962547 | orchestrator | Wednesday 08 April 2026 00:41:47 +0000 (0:00:00.777) 0:00:00.777 ******* 2026-04-08 00:52:17.962559 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:17.962569 | orchestrator | 2026-04-08 00:52:17.962579 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-08 00:52:17.962588 | orchestrator | Wednesday 08 April 2026 00:41:48 +0000 (0:00:01.281) 0:00:02.058 ******* 2026-04-08 00:52:17.962599 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.962609 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.962616 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.962622 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.962628 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.962634 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.962641 | orchestrator | 2026-04-08 00:52:17.962647 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-08 00:52:17.962653 | orchestrator | Wednesday 08 April 2026 00:41:50 +0000 (0:00:01.870) 0:00:03.929 ******* 2026-04-08 00:52:17.962659 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.962665 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.962671 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.962676 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.962682 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.962688 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.962694 | orchestrator | 2026-04-08 00:52:17.962700 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-08 00:52:17.962706 | orchestrator | Wednesday 08 April 2026 00:41:51 +0000 (0:00:00.648) 0:00:04.577 
******* 2026-04-08 00:52:17.963020 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.963026 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.963032 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.963049 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.963055 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.963061 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.963067 | orchestrator | 2026-04-08 00:52:17.963073 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-08 00:52:17.963079 | orchestrator | Wednesday 08 April 2026 00:41:51 +0000 (0:00:00.867) 0:00:05.445 ******* 2026-04-08 00:52:17.963128 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.963136 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.963142 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.963147 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.963153 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.963159 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.963164 | orchestrator | 2026-04-08 00:52:17.963170 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-08 00:52:17.963176 | orchestrator | Wednesday 08 April 2026 00:41:52 +0000 (0:00:00.828) 0:00:06.273 ******* 2026-04-08 00:52:17.963182 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.963187 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.963194 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.963199 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.963205 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.963210 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.963216 | orchestrator | 2026-04-08 00:52:17.963222 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-08 00:52:17.963227 | orchestrator | Wednesday 08 April 2026 00:41:53 
+0000 (0:00:01.007) 0:00:07.280 ******* 2026-04-08 00:52:17.963233 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.963239 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.963244 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.963250 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.963256 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.963261 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.963267 | orchestrator | 2026-04-08 00:52:17.963272 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-08 00:52:17.963278 | orchestrator | Wednesday 08 April 2026 00:41:55 +0000 (0:00:01.198) 0:00:08.479 ******* 2026-04-08 00:52:17.963284 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.963290 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.963296 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.963302 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.963307 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.963313 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.963319 | orchestrator | 2026-04-08 00:52:17.963324 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-08 00:52:17.963330 | orchestrator | Wednesday 08 April 2026 00:41:55 +0000 (0:00:00.851) 0:00:09.330 ******* 2026-04-08 00:52:17.963336 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.963341 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.963347 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.963353 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.963358 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.963364 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.963369 | orchestrator | 2026-04-08 00:52:17.963375 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 
2026-04-08 00:52:17.963381 | orchestrator | Wednesday 08 April 2026 00:41:56 +0000 (0:00:00.619) 0:00:09.950 ******* 2026-04-08 00:52:17.963387 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-08 00:52:17.963393 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-08 00:52:17.963398 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-08 00:52:17.963404 | orchestrator | 2026-04-08 00:52:17.963410 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-08 00:52:17.963415 | orchestrator | Wednesday 08 April 2026 00:41:57 +0000 (0:00:01.365) 0:00:11.315 ******* 2026-04-08 00:52:17.963421 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.963427 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.963432 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.963449 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.963455 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.963461 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.963472 | orchestrator | 2026-04-08 00:52:17.963478 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-08 00:52:17.963553 | orchestrator | Wednesday 08 April 2026 00:41:59 +0000 (0:00:01.674) 0:00:12.990 ******* 2026-04-08 00:52:17.963561 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-08 00:52:17.963567 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-08 00:52:17.963573 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-08 00:52:17.963578 | orchestrator | 2026-04-08 00:52:17.963584 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-08 
00:52:17.963590 | orchestrator | Wednesday 08 April 2026 00:42:01 +0000 (0:00:02.444) 0:00:15.434 ******* 2026-04-08 00:52:17.963596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-08 00:52:17.963602 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-08 00:52:17.963608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-08 00:52:17.963613 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.963619 | orchestrator | 2026-04-08 00:52:17.963626 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-08 00:52:17.963633 | orchestrator | Wednesday 08 April 2026 00:42:02 +0000 (0:00:00.936) 0:00:16.370 ******* 2026-04-08 00:52:17.963642 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.963657 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.963872 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.963880 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.963886 | orchestrator | 2026-04-08 00:52:17.963891 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-08 00:52:17.963897 | orchestrator | Wednesday 08 April 2026 00:42:03 +0000 (0:00:01.039) 0:00:17.411 ******* 2026-04-08 00:52:17.963905 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.963914 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.963920 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.963926 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.963932 | orchestrator | 2026-04-08 00:52:17.963937 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-08 00:52:17.963949 | orchestrator | Wednesday 08 April 2026 00:42:04 +0000 (0:00:00.228) 0:00:17.639 ******* 2026-04-08 00:52:17.963976 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-08 00:42:00.229609', 'end': '2026-04-08 00:42:00.317062', 'delta': '0:00:00.087453', 'msg': '', 
'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.963986 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-08 00:42:00.793906', 'end': '2026-04-08 00:42:00.894048', 'delta': '0:00:00.100142', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.964001 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-08 00:42:01.552938', 'end': '2026-04-08 00:42:01.646490', 'delta': '0:00:00.093552', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  
2026-04-08 00:52:17.964007 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.964013 | orchestrator | 2026-04-08 00:52:17.964019 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-08 00:52:17.964025 | orchestrator | Wednesday 08 April 2026 00:42:04 +0000 (0:00:00.281) 0:00:17.920 ******* 2026-04-08 00:52:17.964031 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.964037 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.964043 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.964051 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.964060 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.964069 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.964078 | orchestrator | 2026-04-08 00:52:17.964087 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-08 00:52:17.964124 | orchestrator | Wednesday 08 April 2026 00:42:07 +0000 (0:00:03.366) 0:00:21.287 ******* 2026-04-08 00:52:17.964135 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-08 00:52:17.964177 | orchestrator | 2026-04-08 00:52:17.964187 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-08 00:52:17.964338 | orchestrator | Wednesday 08 April 2026 00:42:09 +0000 (0:00:01.873) 0:00:23.160 ******* 2026-04-08 00:52:17.964350 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.964581 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.964593 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.964603 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.964613 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.964631 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.964641 | orchestrator | 2026-04-08 00:52:17.964650 | orchestrator | TASK [ceph-facts : Get current fsid] 
******************************************* 2026-04-08 00:52:17.964660 | orchestrator | Wednesday 08 April 2026 00:42:10 +0000 (0:00:00.939) 0:00:24.100 ******* 2026-04-08 00:52:17.964669 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.964678 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.964687 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.964697 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.964706 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.964715 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.964724 | orchestrator | 2026-04-08 00:52:17.964734 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-08 00:52:17.964743 | orchestrator | Wednesday 08 April 2026 00:42:12 +0000 (0:00:01.508) 0:00:25.608 ******* 2026-04-08 00:52:17.964753 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.964762 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.964813 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.964823 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.964832 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.964842 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.964851 | orchestrator | 2026-04-08 00:52:17.964860 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-08 00:52:17.964870 | orchestrator | Wednesday 08 April 2026 00:42:13 +0000 (0:00:00.937) 0:00:26.546 ******* 2026-04-08 00:52:17.964879 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.964888 | orchestrator | 2026-04-08 00:52:17.964897 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-08 00:52:17.965386 | orchestrator | Wednesday 08 April 2026 00:42:13 +0000 (0:00:00.324) 0:00:26.871 ******* 2026-04-08 00:52:17.965405 | orchestrator | skipping: 
[testbed-node-3] 2026-04-08 00:52:17.965415 | orchestrator | 2026-04-08 00:52:17.965424 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-08 00:52:17.965434 | orchestrator | Wednesday 08 April 2026 00:42:13 +0000 (0:00:00.179) 0:00:27.051 ******* 2026-04-08 00:52:17.965571 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.965585 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.965596 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.965665 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.965678 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.965688 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.965698 | orchestrator | 2026-04-08 00:52:17.965708 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-08 00:52:17.965718 | orchestrator | Wednesday 08 April 2026 00:42:14 +0000 (0:00:00.713) 0:00:27.765 ******* 2026-04-08 00:52:17.965728 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.965736 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.965745 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.965752 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.965761 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.965769 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.965777 | orchestrator | 2026-04-08 00:52:17.965785 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-08 00:52:17.965792 | orchestrator | Wednesday 08 April 2026 00:42:15 +0000 (0:00:01.143) 0:00:28.908 ******* 2026-04-08 00:52:17.965798 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.965803 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.965808 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.965813 | orchestrator | skipping: 
[testbed-node-0] 2026-04-08 00:52:17.965818 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.965823 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.965828 | orchestrator | 2026-04-08 00:52:17.965833 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-08 00:52:17.965848 | orchestrator | Wednesday 08 April 2026 00:42:16 +0000 (0:00:00.574) 0:00:29.482 ******* 2026-04-08 00:52:17.965853 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.965859 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.965864 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.965869 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.965874 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.965879 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.965884 | orchestrator | 2026-04-08 00:52:17.966094 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-08 00:52:17.966409 | orchestrator | Wednesday 08 April 2026 00:42:16 +0000 (0:00:00.743) 0:00:30.225 ******* 2026-04-08 00:52:17.966421 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.966434 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.966442 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.966451 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.966458 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.966467 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.966475 | orchestrator | 2026-04-08 00:52:17.966483 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-08 00:52:17.966490 | orchestrator | Wednesday 08 April 2026 00:42:17 +0000 (0:00:00.554) 0:00:30.780 ******* 2026-04-08 00:52:17.966552 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.966562 | orchestrator | skipping: 
[testbed-node-4] 2026-04-08 00:52:17.966960 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.966966 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.966970 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.966975 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.966980 | orchestrator | 2026-04-08 00:52:17.966985 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-08 00:52:17.966990 | orchestrator | Wednesday 08 April 2026 00:42:18 +0000 (0:00:00.764) 0:00:31.545 ******* 2026-04-08 00:52:17.966995 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.967000 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.967005 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.967010 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.967014 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.967019 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.967024 | orchestrator | 2026-04-08 00:52:17.967028 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-08 00:52:17.967033 | orchestrator | Wednesday 08 April 2026 00:42:18 +0000 (0:00:00.497) 0:00:32.042 ******* 2026-04-08 00:52:17.967040 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--19ae3695--7a84--5d0f--ba8d--a81d8fecc8c8-osd--block--19ae3695--7a84--5d0f--ba8d--a81d8fecc8c8', 'dm-uuid-LVM-xwCsGlDwFfkxburlVqB5NLDI6n7sZpTvjhaJzMQa8eJCFjLlT410JpbIrJ5LtPNv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-08 00:52:17.967047 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9c748ac0--b7ad--5284--8a6e--a168bddd5b66-osd--block--9c748ac0--b7ad--5284--8a6e--a168bddd5b66', 'dm-uuid-LVM-XLVRyFhPs4iyEi8xqu03f7y4c8kn3scmlHnu77STip8Ug3VlNS1rlqeaSKGQ5WqB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-08 00:52:17.967136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:52:17.967157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:52:17.967162 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:52:17.967168 
| orchestrator | skipping: [testbed-node-3] => (items loop3-loop7, sda, sdb, sdc, sdd, sr0; per-device facts elided)
2026-04-08 00:52:17.967386 | orchestrator | skipping: [testbed-node-4] => (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0)
2026-04-08 00:52:17.967404 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.967647 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0)
2026-04-08 00:52:17.967777 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.967924 | orchestrator | skipping: [testbed-node-0] => (items loop0-loop7, sda, sr0)
2026-04-08 00:52:17.968140 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.968148 | orchestrator | skipping: [testbed-node-1] => (items loop0-loop6)
2026-04-08 00:52:17.968204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [],
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:52:17.968265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b', 'scsi-SQEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b-part1', 'scsi-SQEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b-part14', 'scsi-SQEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b-part15', 'scsi-SQEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b-part16', 'scsi-SQEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:52:17.968275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-02-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:52:17.968285 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.968290 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.968295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:52:17.968301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:52:17.968306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:52:17.968311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:52:17.968351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:52:17.968361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:52:17.968369 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:52:17.968382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:52:17.968391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0', 'scsi-SQEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0-part1', 'scsi-SQEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0-part14', 'scsi-SQEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0-part15', 'scsi-SQEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0-part16', 'scsi-SQEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:52:17.968442 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-02-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:52:17.968449 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.968454 | orchestrator | 2026-04-08 00:52:17.968459 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-08 00:52:17.968465 | orchestrator | Wednesday 08 April 2026 00:42:19 +0000 (0:00:01.382) 0:00:33.424 ******* 2026-04-08 00:52:17.968471 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--19ae3695--7a84--5d0f--ba8d--a81d8fecc8c8-osd--block--19ae3695--7a84--5d0f--ba8d--a81d8fecc8c8', 'dm-uuid-LVM-xwCsGlDwFfkxburlVqB5NLDI6n7sZpTvjhaJzMQa8eJCFjLlT410JpbIrJ5LtPNv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968481 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9c748ac0--b7ad--5284--8a6e--a168bddd5b66-osd--block--9c748ac0--b7ad--5284--8a6e--a168bddd5b66', 'dm-uuid-LVM-XLVRyFhPs4iyEi8xqu03f7y4c8kn3scmlHnu77STip8Ug3VlNS1rlqeaSKGQ5WqB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968500 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968506 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968511 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968551 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968558 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c5eee886--e951--5b32--a4a0--4842fe7aed13-osd--block--c5eee886--e951--5b32--a4a0--4842fe7aed13', 'dm-uuid-LVM-hSJJjoTW0i9cqMB7qnzyDSUuFdptcJJbpgOsaXvL3Qzue28rxFzgg6iQ1OJLNey5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-04-08 00:52:17.968566 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968575 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--16b9c52d--170e--5f8d--b9c1--c30752bb4b9e-osd--block--16b9c52d--170e--5f8d--b9c1--c30752bb4b9e', 'dm-uuid-LVM-MNYRr1GmUlANIkrAm8Q1XiTJ6Tj3RDwVlEQKgEfBtVKj0DMgSbGsmSH0IckhcMP5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968580 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968585 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968625 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968632 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968637 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968646 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968651 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 
00:52:17.968657 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968666 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968744 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968798 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968820 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92', 'scsi-SQEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92-part1', 'scsi-SQEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92-part14', 'scsi-SQEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92-part15', 
'scsi-SQEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92-part16', 'scsi-SQEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968895 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--19ae3695--7a84--5d0f--ba8d--a81d8fecc8c8-osd--block--19ae3695--7a84--5d0f--ba8d--a81d8fecc8c8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KIB693-1MXL-Jsrw-Vj0a-y756-IACV-bcAZ1n', 'scsi-0QEMU_QEMU_HARDDISK_9851de66-42ac-4afe-9f6b-65921d8ebe77', 'scsi-SQEMU_QEMU_HARDDISK_9851de66-42ac-4afe-9f6b-65921d8ebe77'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968911 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa', 'scsi-SQEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa-part15', 
'scsi-SQEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968923 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9c748ac0--b7ad--5284--8a6e--a168bddd5b66-osd--block--9c748ac0--b7ad--5284--8a6e--a168bddd5b66'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fF1fTX-jpev-cNOf-sWvF-b0nY-2dsf-dsD3cE', 'scsi-0QEMU_QEMU_HARDDISK_3c9bb5e0-782f-4e13-9d09-525e18a95d4a', 'scsi-SQEMU_QEMU_HARDDISK_3c9bb5e0-782f-4e13-9d09-525e18a95d4a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968973 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10ca2d35-3b66-46f3-ab0f-253d8a66f2e6', 'scsi-SQEMU_QEMU_HARDDISK_10ca2d35-3b66-46f3-ab0f-253d8a66f2e6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.968985 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c5eee886--e951--5b32--a4a0--4842fe7aed13-osd--block--c5eee886--e951--5b32--a4a0--4842fe7aed13'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-T24umF-dGrh-Fo0n-yTgT-OrMV-jVVv-MHbK0G', 'scsi-0QEMU_QEMU_HARDDISK_49047f2d-69c1-4fac-a475-f46440c51814', 'scsi-SQEMU_QEMU_HARDDISK_49047f2d-69c1-4fac-a475-f46440c51814'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969005 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-02-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969015 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--16b9c52d--170e--5f8d--b9c1--c30752bb4b9e-osd--block--16b9c52d--170e--5f8d--b9c1--c30752bb4b9e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-c54kMR-Ip2S-4T1g-ey67-uSnv-3dsN-HVYVia', 'scsi-0QEMU_QEMU_HARDDISK_0a17ff12-522e-4235-8e4c-edb4898b90f5', 'scsi-SQEMU_QEMU_HARDDISK_0a17ff12-522e-4235-8e4c-edb4898b90f5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969021 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_22a44c82-a679-4e37-857c-f96ffb845a8a', 'scsi-SQEMU_QEMU_HARDDISK_22a44c82-a679-4e37-857c-f96ffb845a8a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969071 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-02-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969079 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.969085 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c80af5d6--1159--5955--8f01--035b314db1bd-osd--block--c80af5d6--1159--5955--8f01--035b314db1bd', 'dm-uuid-LVM-KlTrF1EDIjiTHHK8zRzK8yCGxCI0DGQ1CUnwoXChyL021HsQR4VIfiu0fYA0jc6C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969095 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.969122 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d7d0ff5a--46f9--53d2--8425--61ef59e49033-osd--block--d7d0ff5a--46f9--53d2--8425--61ef59e49033', 'dm-uuid-LVM-rXS6OKBks0F68YdHLhvZFzeH4w2Md7iuhu1erBcrjvjJQBIjk4II21gfgMcpkuKL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969128 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969133 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969138 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969185 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969193 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969206 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969211 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969255 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969264 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969273 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969334 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969351 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969365 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969373 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969434 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e6c101f-9165-4e11-b46f-dc6c65af7f32', 'scsi-SQEMU_QEMU_HARDDISK_7e6c101f-9165-4e11-b46f-dc6c65af7f32'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e6c101f-9165-4e11-b46f-dc6c65af7f32-part1', 'scsi-SQEMU_QEMU_HARDDISK_7e6c101f-9165-4e11-b46f-dc6c65af7f32-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e6c101f-9165-4e11-b46f-dc6c65af7f32-part14', 'scsi-SQEMU_QEMU_HARDDISK_7e6c101f-9165-4e11-b46f-dc6c65af7f32-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e6c101f-9165-4e11-b46f-dc6c65af7f32-part15', 'scsi-SQEMU_QEMU_HARDDISK_7e6c101f-9165-4e11-b46f-dc6c65af7f32-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e6c101f-9165-4e11-b46f-dc6c65af7f32-part16', 'scsi-SQEMU_QEMU_HARDDISK_7e6c101f-9165-4e11-b46f-dc6c65af7f32-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-08 00:52:17.969452 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969465 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-02-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969474 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969549 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343', 'scsi-SQEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969571 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.969593 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c80af5d6--1159--5955--8f01--035b314db1bd-osd--block--c80af5d6--1159--5955--8f01--035b314db1bd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8o3EVe-Utw7-lM15-VLzY-7aD3-pHv9-pl9uyv', 'scsi-0QEMU_QEMU_HARDDISK_8b42dfd4-7d2e-4d67-9c6f-8993b8aa5f54', 'scsi-SQEMU_QEMU_HARDDISK_8b42dfd4-7d2e-4d67-9c6f-8993b8aa5f54'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969603 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d7d0ff5a--46f9--53d2--8425--61ef59e49033-osd--block--d7d0ff5a--46f9--53d2--8425--61ef59e49033'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-I6vPbl-qQTp-H4zu-SOPt-3OKc-cfy1-s5oD5i', 'scsi-0QEMU_QEMU_HARDDISK_29911cfa-2062-4b30-9263-aae8438640a0', 'scsi-SQEMU_QEMU_HARDDISK_29911cfa-2062-4b30-9263-aae8438640a0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969611 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b1a6d0f-b1ea-4446-8ff8-479db77ebf36', 'scsi-SQEMU_QEMU_HARDDISK_7b1a6d0f-b1ea-4446-8ff8-479db77ebf36'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969661 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-02-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969676 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969684 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969695 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969703 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969710 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969718 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969772 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969782 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969803 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b', 'scsi-SQEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b-part1', 'scsi-SQEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b-part14', 'scsi-SQEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b-part15', 'scsi-SQEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b-part16', 'scsi-SQEMU_QEMU_HARDDISK_d75891d9-cfe8-446f-818d-bc8c8304d51b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-08 00:52:17.969813 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.969866 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-02-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969882 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.969890 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969898 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969911 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969919 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969927 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969935 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.969993 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.970006 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.970077 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0', 'scsi-SQEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0-part1', 'scsi-SQEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0-part14', 'scsi-SQEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0-part15', 'scsi-SQEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0-part16', 
'scsi-SQEMU_QEMU_HARDDISK_e2181995-561c-469f-942a-3ff6a519a6a0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.970093 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-02-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:52:17.970145 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.970151 | orchestrator | 2026-04-08 00:52:17.970204 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-08 00:52:17.970212 | orchestrator | Wednesday 08 April 2026 00:42:21 +0000 (0:00:01.426) 0:00:34.851 ******* 2026-04-08 00:52:17.970217 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.970222 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.970227 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.970232 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.970237 | 
orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.970241 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.970246 | orchestrator | 2026-04-08 00:52:17.970251 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-08 00:52:17.970256 | orchestrator | Wednesday 08 April 2026 00:42:22 +0000 (0:00:01.456) 0:00:36.307 ******* 2026-04-08 00:52:17.970261 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.970265 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.970270 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.970275 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.970280 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.970284 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.970289 | orchestrator | 2026-04-08 00:52:17.970294 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-08 00:52:17.970299 | orchestrator | Wednesday 08 April 2026 00:42:23 +0000 (0:00:00.832) 0:00:37.139 ******* 2026-04-08 00:52:17.970304 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.970308 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.970326 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.970331 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.970338 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.970346 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.970354 | orchestrator | 2026-04-08 00:52:17.970361 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-08 00:52:17.970369 | orchestrator | Wednesday 08 April 2026 00:42:24 +0000 (0:00:00.848) 0:00:37.988 ******* 2026-04-08 00:52:17.970376 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.970383 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.970390 | orchestrator | skipping: [testbed-node-5] 2026-04-08 
00:52:17.970398 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.970405 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.970412 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.970420 | orchestrator | 2026-04-08 00:52:17.970432 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-08 00:52:17.970440 | orchestrator | Wednesday 08 April 2026 00:42:25 +0000 (0:00:00.822) 0:00:38.810 ******* 2026-04-08 00:52:17.970447 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.970455 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.970462 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.970469 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.970477 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.970484 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.970491 | orchestrator | 2026-04-08 00:52:17.970499 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-08 00:52:17.970513 | orchestrator | Wednesday 08 April 2026 00:42:26 +0000 (0:00:00.653) 0:00:39.464 ******* 2026-04-08 00:52:17.970520 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.970528 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.970535 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.970542 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.970550 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.970557 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.970564 | orchestrator | 2026-04-08 00:52:17.970572 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-08 00:52:17.970579 | orchestrator | Wednesday 08 April 2026 00:42:27 +0000 (0:00:01.273) 0:00:40.737 ******* 2026-04-08 00:52:17.970586 | orchestrator | ok: [testbed-node-3] => 
(item=testbed-node-0) 2026-04-08 00:52:17.970594 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-08 00:52:17.970601 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-08 00:52:17.970609 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-08 00:52:17.970617 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-08 00:52:17.970625 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-08 00:52:17.970632 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-08 00:52:17.970639 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-08 00:52:17.970647 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-08 00:52:17.970654 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-08 00:52:17.970662 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-08 00:52:17.970669 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-08 00:52:17.970676 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-08 00:52:17.970683 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-08 00:52:17.970690 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-08 00:52:17.970696 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-08 00:52:17.970703 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-08 00:52:17.970709 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-08 00:52:17.970716 | orchestrator | 2026-04-08 00:52:17.970723 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-08 00:52:17.970730 | orchestrator | Wednesday 08 April 2026 00:42:31 +0000 (0:00:04.211) 0:00:44.949 ******* 2026-04-08 00:52:17.970737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-08 00:52:17.970743 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-1)  2026-04-08 00:52:17.970750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-08 00:52:17.970757 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.970763 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-08 00:52:17.970770 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-08 00:52:17.970777 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-08 00:52:17.970784 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.970791 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-08 00:52:17.970822 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-08 00:52:17.970830 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-08 00:52:17.970837 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-08 00:52:17.970844 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.970851 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-08 00:52:17.970859 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-08 00:52:17.970866 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-08 00:52:17.970873 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-08 00:52:17.970880 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-08 00:52:17.970893 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.970900 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.970908 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-08 00:52:17.970915 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-08 00:52:17.970922 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-08 00:52:17.970929 | orchestrator | 
skipping: [testbed-node-2] 2026-04-08 00:52:17.970934 | orchestrator | 2026-04-08 00:52:17.970939 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-08 00:52:17.970945 | orchestrator | Wednesday 08 April 2026 00:42:32 +0000 (0:00:01.406) 0:00:46.356 ******* 2026-04-08 00:52:17.970949 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.970955 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.970960 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.970966 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.970971 | orchestrator | 2026-04-08 00:52:17.970976 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-08 00:52:17.970982 | orchestrator | Wednesday 08 April 2026 00:42:34 +0000 (0:00:01.434) 0:00:47.790 ******* 2026-04-08 00:52:17.970990 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.970995 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.971001 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.971006 | orchestrator | 2026-04-08 00:52:17.971011 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-08 00:52:17.971016 | orchestrator | Wednesday 08 April 2026 00:42:34 +0000 (0:00:00.431) 0:00:48.222 ******* 2026-04-08 00:52:17.971021 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.971026 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.971031 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.971036 | orchestrator | 2026-04-08 00:52:17.971041 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-08 00:52:17.971046 | orchestrator | Wednesday 08 April 2026 00:42:35 +0000 
(0:00:00.319) 0:00:48.542 ******* 2026-04-08 00:52:17.971051 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.971056 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.971061 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.971066 | orchestrator | 2026-04-08 00:52:17.971071 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-08 00:52:17.971076 | orchestrator | Wednesday 08 April 2026 00:42:35 +0000 (0:00:00.375) 0:00:48.917 ******* 2026-04-08 00:52:17.971081 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.971086 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.971092 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.971116 | orchestrator | 2026-04-08 00:52:17.971122 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-08 00:52:17.971128 | orchestrator | Wednesday 08 April 2026 00:42:36 +0000 (0:00:00.743) 0:00:49.661 ******* 2026-04-08 00:52:17.971133 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-08 00:52:17.971138 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 00:52:17.971143 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:52:17.971148 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.971153 | orchestrator | 2026-04-08 00:52:17.971158 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-08 00:52:17.971163 | orchestrator | Wednesday 08 April 2026 00:42:36 +0000 (0:00:00.461) 0:00:50.123 ******* 2026-04-08 00:52:17.971168 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-08 00:52:17.971173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 00:52:17.971179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:52:17.971193 | orchestrator | 
skipping: [testbed-node-3] 2026-04-08 00:52:17.971200 | orchestrator | 2026-04-08 00:52:17.971207 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-08 00:52:17.971213 | orchestrator | Wednesday 08 April 2026 00:42:37 +0000 (0:00:00.404) 0:00:50.528 ******* 2026-04-08 00:52:17.971220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-08 00:52:17.971227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 00:52:17.971233 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:52:17.971239 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.971245 | orchestrator | 2026-04-08 00:52:17.971252 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-08 00:52:17.971259 | orchestrator | Wednesday 08 April 2026 00:42:37 +0000 (0:00:00.472) 0:00:51.000 ******* 2026-04-08 00:52:17.971265 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.971272 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.971279 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.971286 | orchestrator | 2026-04-08 00:52:17.971293 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-08 00:52:17.971299 | orchestrator | Wednesday 08 April 2026 00:42:37 +0000 (0:00:00.395) 0:00:51.395 ******* 2026-04-08 00:52:17.971306 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-08 00:52:17.971313 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-08 00:52:17.971355 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-08 00:52:17.971362 | orchestrator | 2026-04-08 00:52:17.971366 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-08 00:52:17.971370 | orchestrator | Wednesday 08 April 2026 00:42:38 +0000 (0:00:00.910) 0:00:52.306 ******* 2026-04-08 00:52:17.971375 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-08 00:52:17.971379 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-08 00:52:17.971384 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-08 00:52:17.971388 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-08 00:52:17.971392 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-08 00:52:17.971397 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-08 00:52:17.971401 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-08 00:52:17.971405 | orchestrator | 2026-04-08 00:52:17.971409 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-08 00:52:17.971414 | orchestrator | Wednesday 08 April 2026 00:42:40 +0000 (0:00:01.633) 0:00:53.940 ******* 2026-04-08 00:52:17.971418 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-08 00:52:17.971422 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-08 00:52:17.971427 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-08 00:52:17.971431 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-08 00:52:17.971435 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-08 00:52:17.971439 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-08 00:52:17.971448 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-08 00:52:17.971452 | orchestrator | 2026-04-08 00:52:17.971457 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-08 00:52:17.971461 | orchestrator | Wednesday 08 April 2026 00:42:43 +0000 (0:00:02.574) 0:00:56.514 ******* 2026-04-08 00:52:17.971466 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:17.971478 | orchestrator | 2026-04-08 00:52:17.971482 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-08 00:52:17.971486 | orchestrator | Wednesday 08 April 2026 00:42:43 +0000 (0:00:00.892) 0:00:57.407 ******* 2026-04-08 00:52:17.971491 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:17.971495 | orchestrator | 2026-04-08 00:52:17.971499 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-08 00:52:17.971504 | orchestrator | Wednesday 08 April 2026 00:42:44 +0000 (0:00:00.888) 0:00:58.295 ******* 2026-04-08 00:52:17.971508 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.971512 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.971517 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.971521 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.971525 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.971530 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.971534 | orchestrator | 2026-04-08 00:52:17.971538 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-08 00:52:17.971543 | orchestrator | Wednesday 08 April 2026 00:42:45 +0000 (0:00:00.822) 0:00:59.118 ******* 2026-04-08 00:52:17.971547 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.971551 
| orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.971556 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.971560 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.971564 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.971568 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.971573 | orchestrator | 2026-04-08 00:52:17.971577 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-08 00:52:17.971581 | orchestrator | Wednesday 08 April 2026 00:42:46 +0000 (0:00:00.972) 0:01:00.090 ******* 2026-04-08 00:52:17.971585 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.971590 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.971594 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.971598 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.971602 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.971607 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.971611 | orchestrator | 2026-04-08 00:52:17.971615 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-08 00:52:17.971620 | orchestrator | Wednesday 08 April 2026 00:42:47 +0000 (0:00:00.835) 0:01:00.925 ******* 2026-04-08 00:52:17.971624 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.971628 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.971632 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.971637 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.971641 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.971645 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.971650 | orchestrator | 2026-04-08 00:52:17.971654 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-08 00:52:17.971658 | orchestrator | Wednesday 08 April 2026 00:42:48 +0000 (0:00:00.849) 0:01:01.775 ******* 
2026-04-08 00:52:17.971662 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.971667 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.971671 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.971675 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.971680 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.971700 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.971706 | orchestrator | 2026-04-08 00:52:17.971710 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-08 00:52:17.971715 | orchestrator | Wednesday 08 April 2026 00:42:49 +0000 (0:00:00.913) 0:01:02.688 ******* 2026-04-08 00:52:17.971719 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.971723 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.971728 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.971735 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.971740 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.971744 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.971748 | orchestrator | 2026-04-08 00:52:17.971753 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-08 00:52:17.971757 | orchestrator | Wednesday 08 April 2026 00:42:50 +0000 (0:00:00.802) 0:01:03.491 ******* 2026-04-08 00:52:17.971761 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.971766 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.971770 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.971775 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.971779 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.971784 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.971788 | orchestrator | 2026-04-08 00:52:17.971792 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] 
************************* 2026-04-08 00:52:17.971797 | orchestrator | Wednesday 08 April 2026 00:42:50 +0000 (0:00:00.619) 0:01:04.110 ******* 2026-04-08 00:52:17.971801 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.971806 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.971810 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.971814 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.971819 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.971823 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.971827 | orchestrator | 2026-04-08 00:52:17.971832 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-08 00:52:17.971836 | orchestrator | Wednesday 08 April 2026 00:42:52 +0000 (0:00:01.427) 0:01:05.537 ******* 2026-04-08 00:52:17.971841 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.971845 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.971849 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.971854 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.971858 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.971865 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.971870 | orchestrator | 2026-04-08 00:52:17.971874 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-08 00:52:17.971878 | orchestrator | Wednesday 08 April 2026 00:42:53 +0000 (0:00:01.071) 0:01:06.609 ******* 2026-04-08 00:52:17.971883 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.971887 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.971892 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.971896 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.971900 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.971906 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.971913 | orchestrator | 2026-04-08 00:52:17.971920 | 
orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-08 00:52:17.971925 | orchestrator | Wednesday 08 April 2026 00:42:53 +0000 (0:00:00.838) 0:01:07.447 ******* 2026-04-08 00:52:17.971929 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.971934 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.971938 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.971942 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.971946 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.971951 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.971955 | orchestrator | 2026-04-08 00:52:17.971959 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-08 00:52:17.971963 | orchestrator | Wednesday 08 April 2026 00:42:54 +0000 (0:00:00.658) 0:01:08.106 ******* 2026-04-08 00:52:17.971968 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.971972 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.971976 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.971980 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.971985 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.971989 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.971993 | orchestrator | 2026-04-08 00:52:17.971997 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-08 00:52:17.972006 | orchestrator | Wednesday 08 April 2026 00:42:55 +0000 (0:00:00.848) 0:01:08.954 ******* 2026-04-08 00:52:17.972010 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.972015 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.972019 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.972023 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.972027 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.972032 | orchestrator | skipping: 
[testbed-node-2] 2026-04-08 00:52:17.972036 | orchestrator | 2026-04-08 00:52:17.972040 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-08 00:52:17.972044 | orchestrator | Wednesday 08 April 2026 00:42:56 +0000 (0:00:00.819) 0:01:09.774 ******* 2026-04-08 00:52:17.972049 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.972053 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.972058 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.972062 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.972066 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.972070 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.972075 | orchestrator | 2026-04-08 00:52:17.972079 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-08 00:52:17.972083 | orchestrator | Wednesday 08 April 2026 00:42:57 +0000 (0:00:00.906) 0:01:10.681 ******* 2026-04-08 00:52:17.972087 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.972092 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.972096 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.972142 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.972147 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.972151 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.972155 | orchestrator | 2026-04-08 00:52:17.972159 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-08 00:52:17.972164 | orchestrator | Wednesday 08 April 2026 00:42:58 +0000 (0:00:00.838) 0:01:11.519 ******* 2026-04-08 00:52:17.972168 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.972172 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.972177 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.972181 | orchestrator | skipping: [testbed-node-0] 
2026-04-08 00:52:17.972204 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.972209 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.972213 | orchestrator | 2026-04-08 00:52:17.972218 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-08 00:52:17.972222 | orchestrator | Wednesday 08 April 2026 00:42:58 +0000 (0:00:00.729) 0:01:12.249 ******* 2026-04-08 00:52:17.972226 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.972231 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.972235 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.972239 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.972244 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.972248 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.972252 | orchestrator | 2026-04-08 00:52:17.972257 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-08 00:52:17.972261 | orchestrator | Wednesday 08 April 2026 00:42:59 +0000 (0:00:00.685) 0:01:12.934 ******* 2026-04-08 00:52:17.972265 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.972270 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.972274 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.972278 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.972283 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.972287 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.972294 | orchestrator | 2026-04-08 00:52:17.972301 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-08 00:52:17.972309 | orchestrator | Wednesday 08 April 2026 00:43:00 +0000 (0:00:00.707) 0:01:13.642 ******* 2026-04-08 00:52:17.972316 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.972322 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.972341 | orchestrator | ok: [testbed-node-5] 
2026-04-08 00:52:17.972349 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.972356 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.972362 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.972369 | orchestrator | 2026-04-08 00:52:17.972376 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-08 00:52:17.972383 | orchestrator | Wednesday 08 April 2026 00:43:01 +0000 (0:00:01.218) 0:01:14.861 ******* 2026-04-08 00:52:17.972390 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:52:17.972397 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:52:17.972403 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:52:17.972410 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:17.972422 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:17.972429 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:17.972435 | orchestrator | 2026-04-08 00:52:17.972443 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-08 00:52:17.972450 | orchestrator | Wednesday 08 April 2026 00:43:03 +0000 (0:00:02.436) 0:01:17.297 ******* 2026-04-08 00:52:17.972457 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:17.972464 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:52:17.972471 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:52:17.972478 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:17.972485 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:52:17.972492 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:17.972499 | orchestrator | 2026-04-08 00:52:17.972506 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-08 00:52:17.972513 | orchestrator | Wednesday 08 April 2026 00:43:06 +0000 (0:00:02.544) 0:01:19.842 ******* 2026-04-08 00:52:17.972521 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:17.972528 | orchestrator | 2026-04-08 00:52:17.972535 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-08 00:52:17.972542 | orchestrator | Wednesday 08 April 2026 00:43:08 +0000 (0:00:02.062) 0:01:21.905 ******* 2026-04-08 00:52:17.972549 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.972557 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.972564 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.972571 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.972578 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.972585 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.972591 | orchestrator | 2026-04-08 00:52:17.972598 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-08 00:52:17.972605 | orchestrator | Wednesday 08 April 2026 00:43:09 +0000 (0:00:00.918) 0:01:22.823 ******* 2026-04-08 00:52:17.972612 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.972619 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.972626 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.972633 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.972640 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.972647 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.972653 | orchestrator | 2026-04-08 00:52:17.972660 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-08 00:52:17.972667 | orchestrator | Wednesday 08 April 2026 00:43:10 +0000 (0:00:00.902) 0:01:23.726 ******* 2026-04-08 00:52:17.972673 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-08 
00:52:17.972680 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-08 00:52:17.972686 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-08 00:52:17.972693 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-08 00:52:17.972699 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-08 00:52:17.972712 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-08 00:52:17.972716 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-08 00:52:17.972720 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-08 00:52:17.972724 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-08 00:52:17.972728 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-08 00:52:17.972767 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-08 00:52:17.972772 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-08 00:52:17.972776 | orchestrator | 2026-04-08 00:52:17.972781 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-08 00:52:17.972784 | orchestrator | Wednesday 08 April 2026 00:43:12 +0000 (0:00:02.150) 0:01:25.877 ******* 2026-04-08 00:52:17.972788 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:52:17.972792 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:52:17.972797 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:17.972801 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:52:17.972804 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:17.972809 | 
orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:17.972813 | orchestrator | 2026-04-08 00:52:17.972817 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-08 00:52:17.972821 | orchestrator | Wednesday 08 April 2026 00:43:13 +0000 (0:00:01.469) 0:01:27.346 ******* 2026-04-08 00:52:17.972825 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.972829 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.972833 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.972837 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.972841 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.972845 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.972849 | orchestrator | 2026-04-08 00:52:17.972853 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-08 00:52:17.972857 | orchestrator | Wednesday 08 April 2026 00:43:14 +0000 (0:00:00.864) 0:01:28.210 ******* 2026-04-08 00:52:17.972861 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.972865 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.972869 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.972872 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.972876 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.972880 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.972884 | orchestrator | 2026-04-08 00:52:17.972888 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-08 00:52:17.972897 | orchestrator | Wednesday 08 April 2026 00:43:15 +0000 (0:00:00.542) 0:01:28.753 ******* 2026-04-08 00:52:17.972902 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.972905 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.972909 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.972913 | 
orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.972917 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.972921 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.972925 | orchestrator | 2026-04-08 00:52:17.972929 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-08 00:52:17.972933 | orchestrator | Wednesday 08 April 2026 00:43:15 +0000 (0:00:00.689) 0:01:29.443 ******* 2026-04-08 00:52:17.972938 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:17.972942 | orchestrator | 2026-04-08 00:52:17.972946 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-08 00:52:17.972954 | orchestrator | Wednesday 08 April 2026 00:43:16 +0000 (0:00:00.977) 0:01:30.420 ******* 2026-04-08 00:52:17.972958 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.972963 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.972967 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.972971 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.972975 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.972978 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.972982 | orchestrator | 2026-04-08 00:52:17.972987 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-08 00:52:17.972991 | orchestrator | Wednesday 08 April 2026 00:44:21 +0000 (0:01:04.755) 0:02:35.176 ******* 2026-04-08 00:52:17.972994 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-08 00:52:17.972998 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-08 00:52:17.973002 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-04-08 00:52:17.973006 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.973010 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-08 00:52:17.973014 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-08 00:52:17.973018 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-08 00:52:17.973022 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.973026 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-08 00:52:17.973030 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-08 00:52:17.973034 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-08 00:52:17.973038 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.973042 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-08 00:52:17.973046 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-08 00:52:17.973050 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-08 00:52:17.973054 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-08 00:52:17.973058 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-08 00:52:17.973062 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-08 00:52:17.973066 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.973070 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.973088 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-08 00:52:17.973093 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io/prom/prometheus:v2.7.2)  2026-04-08 00:52:17.973121 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-08 00:52:17.973126 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.973130 | orchestrator | 2026-04-08 00:52:17.973134 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-08 00:52:17.973138 | orchestrator | Wednesday 08 April 2026 00:44:22 +0000 (0:00:00.631) 0:02:35.807 ******* 2026-04-08 00:52:17.973142 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.973146 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.973150 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.973153 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.973157 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.973161 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.973165 | orchestrator | 2026-04-08 00:52:17.973169 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-08 00:52:17.973173 | orchestrator | Wednesday 08 April 2026 00:44:22 +0000 (0:00:00.472) 0:02:36.280 ******* 2026-04-08 00:52:17.973177 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.973185 | orchestrator | 2026-04-08 00:52:17.973189 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-08 00:52:17.973193 | orchestrator | Wednesday 08 April 2026 00:44:23 +0000 (0:00:00.179) 0:02:36.460 ******* 2026-04-08 00:52:17.973197 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.973201 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.973205 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.973209 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.973212 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.973216 | orchestrator | skipping: 
[testbed-node-2] 2026-04-08 00:52:17.973220 | orchestrator | 2026-04-08 00:52:17.973224 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-08 00:52:17.973228 | orchestrator | Wednesday 08 April 2026 00:44:23 +0000 (0:00:00.665) 0:02:37.125 ******* 2026-04-08 00:52:17.973232 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.973236 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.973240 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.973247 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.973251 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.973255 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.973259 | orchestrator | 2026-04-08 00:52:17.973263 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-08 00:52:17.973267 | orchestrator | Wednesday 08 April 2026 00:44:24 +0000 (0:00:00.551) 0:02:37.677 ******* 2026-04-08 00:52:17.973270 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.973274 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.973278 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.973282 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.973286 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.973290 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.973294 | orchestrator | 2026-04-08 00:52:17.973298 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-08 00:52:17.973302 | orchestrator | Wednesday 08 April 2026 00:44:24 +0000 (0:00:00.672) 0:02:38.349 ******* 2026-04-08 00:52:17.973306 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.973310 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.973313 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.973317 | orchestrator | ok: [testbed-node-0] 2026-04-08 
00:52:17.973321 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.973325 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.973329 | orchestrator | 2026-04-08 00:52:17.973333 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-08 00:52:17.973337 | orchestrator | Wednesday 08 April 2026 00:44:27 +0000 (0:00:02.371) 0:02:40.721 ******* 2026-04-08 00:52:17.973341 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.973345 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.973349 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.973353 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.973359 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.973365 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.973371 | orchestrator | 2026-04-08 00:52:17.973377 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-08 00:52:17.973383 | orchestrator | Wednesday 08 April 2026 00:44:28 +0000 (0:00:00.893) 0:02:41.615 ******* 2026-04-08 00:52:17.973389 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:17.973396 | orchestrator | 2026-04-08 00:52:17.973402 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-08 00:52:17.973408 | orchestrator | Wednesday 08 April 2026 00:44:29 +0000 (0:00:01.273) 0:02:42.889 ******* 2026-04-08 00:52:17.973415 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.973421 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.973427 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.973437 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.973444 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.973450 | orchestrator | skipping: 
[testbed-node-2] 2026-04-08 00:52:17.973456 | orchestrator | 2026-04-08 00:52:17.973462 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-08 00:52:17.973468 | orchestrator | Wednesday 08 April 2026 00:44:29 +0000 (0:00:00.542) 0:02:43.432 ******* 2026-04-08 00:52:17.973475 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.973481 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.973487 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.973495 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.973499 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.973503 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.973507 | orchestrator | 2026-04-08 00:52:17.973511 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-08 00:52:17.973515 | orchestrator | Wednesday 08 April 2026 00:44:30 +0000 (0:00:00.726) 0:02:44.158 ******* 2026-04-08 00:52:17.973519 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.973523 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.973545 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.973550 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.973554 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.973558 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.973562 | orchestrator | 2026-04-08 00:52:17.973566 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-08 00:52:17.973570 | orchestrator | Wednesday 08 April 2026 00:44:31 +0000 (0:00:00.587) 0:02:44.745 ******* 2026-04-08 00:52:17.973574 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.973578 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.973582 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.973586 | orchestrator | skipping: 
[testbed-node-0] 2026-04-08 00:52:17.973590 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.973593 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.973597 | orchestrator | 2026-04-08 00:52:17.973601 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-08 00:52:17.973605 | orchestrator | Wednesday 08 April 2026 00:44:32 +0000 (0:00:00.714) 0:02:45.460 ******* 2026-04-08 00:52:17.973611 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.973618 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.973625 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.973633 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.973637 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.973641 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.973645 | orchestrator | 2026-04-08 00:52:17.973649 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-08 00:52:17.973653 | orchestrator | Wednesday 08 April 2026 00:44:32 +0000 (0:00:00.512) 0:02:45.972 ******* 2026-04-08 00:52:17.973657 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.973660 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.973665 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.973671 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.973677 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.973683 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.973689 | orchestrator | 2026-04-08 00:52:17.973696 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-08 00:52:17.973703 | orchestrator | Wednesday 08 April 2026 00:44:33 +0000 (0:00:00.695) 0:02:46.667 ******* 2026-04-08 00:52:17.973713 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.973720 | orchestrator | skipping: 
[testbed-node-4] 2026-04-08 00:52:17.973727 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.973735 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.973739 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.973743 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.973766 | orchestrator | 2026-04-08 00:52:17.973773 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-08 00:52:17.973779 | orchestrator | Wednesday 08 April 2026 00:44:33 +0000 (0:00:00.552) 0:02:47.220 ******* 2026-04-08 00:52:17.973785 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.973792 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.973799 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.973803 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.973807 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.973811 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.973815 | orchestrator | 2026-04-08 00:52:17.973818 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-08 00:52:17.973822 | orchestrator | Wednesday 08 April 2026 00:44:34 +0000 (0:00:00.817) 0:02:48.038 ******* 2026-04-08 00:52:17.973826 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.973830 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.973834 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.973838 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.973842 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.973846 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.973850 | orchestrator | 2026-04-08 00:52:17.973854 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-08 00:52:17.973858 | orchestrator | Wednesday 08 April 2026 00:44:35 +0000 (0:00:01.013) 0:02:49.051 ******* 2026-04-08 
00:52:17.973861 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:17.973866 | orchestrator | 2026-04-08 00:52:17.973869 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-08 00:52:17.973873 | orchestrator | Wednesday 08 April 2026 00:44:36 +0000 (0:00:01.177) 0:02:50.229 ******* 2026-04-08 00:52:17.973877 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-04-08 00:52:17.973881 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-08 00:52:17.973885 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-04-08 00:52:17.973889 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-04-08 00:52:17.973895 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-08 00:52:17.973901 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-04-08 00:52:17.973908 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-04-08 00:52:17.973914 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-08 00:52:17.973919 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-04-08 00:52:17.973925 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-04-08 00:52:17.973931 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-08 00:52:17.973937 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-04-08 00:52:17.973943 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-04-08 00:52:17.973950 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-08 00:52:17.973955 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-04-08 00:52:17.973959 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 
2026-04-08 00:52:17.973963 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-08 00:52:17.973967 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-08 00:52:17.973990 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-08 00:52:17.973995 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-04-08 00:52:17.973999 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-08 00:52:17.974003 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-08 00:52:17.974007 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-08 00:52:17.974042 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-08 00:52:17.974048 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-08 00:52:17.974052 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-04-08 00:52:17.974055 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-08 00:52:17.974059 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-08 00:52:17.974063 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-08 00:52:17.974067 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-08 00:52:17.974071 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-04-08 00:52:17.974075 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-08 00:52:17.974079 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-08 00:52:17.974082 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-08 00:52:17.974086 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-08 00:52:17.974090 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-08 00:52:17.974094 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-08 00:52:17.974115 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-04-08 00:52:17.974119 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-08 00:52:17.974123 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-08 00:52:17.974127 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-08 00:52:17.974134 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-08 00:52:17.974138 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-04-08 00:52:17.974142 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-08 00:52:17.974146 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-08 00:52:17.974150 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-08 00:52:17.974154 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-08 00:52:17.974157 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-08 00:52:17.974161 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-08 00:52:17.974165 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-08 00:52:17.974169 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-08 00:52:17.974173 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-08 00:52:17.974177 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-08 00:52:17.974181 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-08 00:52:17.974185 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-08 00:52:17.974189 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-08 00:52:17.974193 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-08 00:52:17.974197 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-08 00:52:17.974201 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-08 00:52:17.974207 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-08 00:52:17.974213 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-08 00:52:17.974219 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-08 00:52:17.974226 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-08 00:52:17.974231 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-08 00:52:17.974235 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-08 00:52:17.974243 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-08 00:52:17.974247 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-08 00:52:17.974250 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-08 00:52:17.974254 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-08 00:52:17.974258 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-08 00:52:17.974262 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-08 00:52:17.974266 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-08 00:52:17.974270 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-08 00:52:17.974274 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-08 00:52:17.974279 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-08 00:52:17.974286 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-08 00:52:17.974313 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-08 00:52:17.974319 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-08 00:52:17.974323 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-04-08 00:52:17.974327 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-08 00:52:17.974330 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-08 00:52:17.974334 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-08 00:52:17.974338 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-08 00:52:17.974342 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-08 00:52:17.974346 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-04-08 00:52:17.974350 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-04-08 00:52:17.974354 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-04-08 00:52:17.974358 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-04-08 00:52:17.974362 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-04-08 00:52:17.974366 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-08 00:52:17.974370 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-04-08 00:52:17.974373 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-04-08 00:52:17.974377 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-04-08 00:52:17.974381 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-04-08 00:52:17.974385 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-04-08 00:52:17.974389 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-04-08 00:52:17.974393 | orchestrator |
2026-04-08 00:52:17.974397 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-08 00:52:17.974401 | orchestrator | Wednesday 08 April 2026 00:44:42 +0000 (0:00:05.936) 0:02:56.165 *******
2026-04-08 00:52:17.974407 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.974411 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.974415 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.974420 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:52:17.974424 | orchestrator |
2026-04-08 00:52:17.974428 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-08 00:52:17.974432 | orchestrator | Wednesday 08 April 2026 00:44:43 +0000 (0:00:00.934) 0:02:57.099 *******
2026-04-08 00:52:17.974436 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-08 00:52:17.974445 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-08 00:52:17.974449 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-08 00:52:17.974453 | orchestrator |
2026-04-08 00:52:17.974467 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-08 00:52:17.974471 | orchestrator | Wednesday 08 April 2026 00:44:44 +0000 (0:00:00.613) 0:02:57.713 *******
2026-04-08 00:52:17.974475 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-08 00:52:17.974479 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-08 00:52:17.974483 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-08 00:52:17.974487 | orchestrator |
2026-04-08 00:52:17.974491 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-08 00:52:17.974497 | orchestrator | Wednesday 08 April 2026 00:44:45 +0000 (0:00:01.131) 0:02:58.844 *******
2026-04-08 00:52:17.974504 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.974510 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.974518 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.974522 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.974526 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.974530 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.974534 | orchestrator |
2026-04-08 00:52:17.974538 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-08 00:52:17.974542 | orchestrator | Wednesday 08 April 2026 00:44:46 +0000 (0:00:00.739) 0:02:59.583 *******
2026-04-08 00:52:17.974546 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.974550 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.974554 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.974557 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.974562 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.974568 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.974574 | orchestrator |
2026-04-08 00:52:17.974580 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-08 00:52:17.974587 | orchestrator | Wednesday 08 April 2026 00:44:46 +0000 (0:00:00.573) 0:03:00.156 *******
2026-04-08 00:52:17.974593 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.974599 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.974605 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.974611 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.974617 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.974624 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.974630 | orchestrator |
2026-04-08 00:52:17.974656 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-08 00:52:17.974663 | orchestrator | Wednesday 08 April 2026 00:44:47 +0000 (0:00:00.666) 0:03:00.823 *******
2026-04-08 00:52:17.974672 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.974680 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.974686 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.974692 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.974699 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.974706 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.974710 | orchestrator |
2026-04-08 00:52:17.974714 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-08 00:52:17.974718 | orchestrator | Wednesday 08 April 2026 00:44:47 +0000 (0:00:00.477) 0:03:01.300 *******
2026-04-08 00:52:17.974722 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.974731 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.974734 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.974741 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.974747 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.974753 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.974760 | orchestrator |
2026-04-08 00:52:17.974766 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-08 00:52:17.974773 | orchestrator | Wednesday 08 April 2026 00:44:48 +0000 (0:00:00.612) 0:03:01.913 *******
2026-04-08 00:52:17.974781 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.974785 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.974789 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.974793 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.974797 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.974804 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.974810 | orchestrator |
2026-04-08 00:52:17.974817 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-08 00:52:17.974823 | orchestrator | Wednesday 08 April 2026 00:44:48 +0000 (0:00:00.516) 0:03:02.430 *******
2026-04-08 00:52:17.974830 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.974836 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.974895 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.974902 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.974914 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.974918 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.974922 | orchestrator |
2026-04-08 00:52:17.974926 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-08 00:52:17.974930 | orchestrator | Wednesday 08 April 2026 00:44:49 +0000 (0:00:00.689) 0:03:03.119 *******
2026-04-08 00:52:17.974934 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.974938 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.974942 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.974946 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.974950 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.974954 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.974958 | orchestrator |
2026-04-08 00:52:17.974962 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-08 00:52:17.974966 | orchestrator | Wednesday 08 April 2026 00:44:50 +0000 (0:00:00.583) 0:03:03.703 *******
2026-04-08 00:52:17.974972 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.974978 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.974984 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.974990 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.974997 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.975002 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.975008 | orchestrator |
2026-04-08 00:52:17.975014 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-08 00:52:17.975021 | orchestrator | Wednesday 08 April 2026 00:44:52 +0000 (0:00:02.688) 0:03:06.392 *******
2026-04-08 00:52:17.975027 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.975033 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.975039 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.975045 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.975051 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.975057 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.975063 | orchestrator |
2026-04-08 00:52:17.975069 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-08 00:52:17.975075 | orchestrator | Wednesday 08 April 2026 00:44:53 +0000 (0:00:00.520) 0:03:06.913 *******
2026-04-08 00:52:17.975081 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.975088 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.975094 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.975153 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.975158 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.975162 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.975165 | orchestrator |
2026-04-08 00:52:17.975169 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-08 00:52:17.975173 | orchestrator | Wednesday 08 April 2026 00:44:54 +0000 (0:00:00.689) 0:03:07.602 *******
2026-04-08 00:52:17.975177 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.975181 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.975185 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.975189 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.975193 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.975196 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.975201 | orchestrator |
2026-04-08 00:52:17.975204 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-08 00:52:17.975208 | orchestrator | Wednesday 08 April 2026 00:44:54 +0000 (0:00:00.634) 0:03:08.236 *******
2026-04-08 00:52:17.975213 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-08 00:52:17.975217 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-08 00:52:17.975221 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-08 00:52:17.975225 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.975269 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.975278 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.975284 | orchestrator |
2026-04-08 00:52:17.975290 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-08 00:52:17.975296 | orchestrator | Wednesday 08 April 2026 00:44:55 +0000 (0:00:00.645) 0:03:08.882 *******
2026-04-08 00:52:17.975304 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-04-08 00:52:17.975313 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-04-08 00:52:17.975320 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.975327 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-04-08 00:52:17.975339 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-08 00:52:17.975346 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-04-08 00:52:17.975353 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-08 00:52:17.975363 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.975367 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.975371 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.975375 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.975379 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.975383 | orchestrator |
2026-04-08 00:52:17.975387 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-08 00:52:17.975390 | orchestrator | Wednesday 08 April 2026 00:44:56 +0000 (0:00:00.791) 0:03:09.673 *******
2026-04-08 00:52:17.975394 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.975398 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.975402 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.975406 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.975410 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.975414 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.975418 | orchestrator |
2026-04-08 00:52:17.975422 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-08 00:52:17.975425 | orchestrator | Wednesday 08 April 2026 00:44:57 +0000 (0:00:00.954) 0:03:10.628 *******
2026-04-08 00:52:17.975429 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.975433 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.975437 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.975441 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.975445 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.975449 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.975453 | orchestrator |
2026-04-08 00:52:17.975457 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-08 00:52:17.975461 | orchestrator | Wednesday 08 April 2026 00:44:57 +0000 (0:00:00.633) 0:03:11.262 *******
2026-04-08 00:52:17.975464 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.975468 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.975472 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.975476 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.975479 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.975483 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.975487 | orchestrator |
2026-04-08 00:52:17.975490 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-08 00:52:17.975494 | orchestrator | Wednesday 08 April 2026 00:44:58 +0000 (0:00:00.584) 0:03:11.846 *******
2026-04-08 00:52:17.975498 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.975501 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.975505 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.975511 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.975517 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.975522 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.975528 | orchestrator |
2026-04-08 00:52:17.975534 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-08 00:52:17.975565 | orchestrator | Wednesday 08 April 2026 00:44:59 +0000 (0:00:00.673) 0:03:12.520 *******
2026-04-08 00:52:17.975572 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.975578 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.975583 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.975589 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.975595 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.975601 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.975607 | orchestrator |
2026-04-08 00:52:17.975614 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-08 00:52:17.975618 | orchestrator | Wednesday 08 April 2026 00:44:59 +0000 (0:00:00.565) 0:03:13.085 *******
2026-04-08 00:52:17.975622 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.975626 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.975635 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.975639 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.975643 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.975646 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.975650 | orchestrator |
2026-04-08 00:52:17.975654 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-08 00:52:17.975658 | orchestrator | Wednesday 08 April 2026 00:45:00 +0000 (0:00:00.916) 0:03:14.002 *******
2026-04-08 00:52:17.975661 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-08 00:52:17.975665 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-08 00:52:17.975669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-08 00:52:17.975673 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.975677 | orchestrator |
2026-04-08 00:52:17.975680 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-08 00:52:17.975684 | orchestrator | Wednesday 08 April 2026 00:45:01 +0000 (0:00:00.616) 0:03:14.618 *******
2026-04-08 00:52:17.975688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-08 00:52:17.975692 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-08 00:52:17.975695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-08 00:52:17.975700 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.975706 | orchestrator |
2026-04-08 00:52:17.975717 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-08 00:52:17.975723 | orchestrator | Wednesday 08 April 2026 00:45:01 +0000 (0:00:00.336) 0:03:14.955 *******
2026-04-08 00:52:17.975729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-08 00:52:17.975734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-08 00:52:17.975740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-08 00:52:17.975746 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.975751 | orchestrator |
2026-04-08 00:52:17.975758 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-08 00:52:17.975764 | orchestrator | Wednesday 08 April 2026 00:45:01 +0000 (0:00:00.335) 0:03:15.290 *******
2026-04-08 00:52:17.975770 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.975777 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.975781 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.975785 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.975789 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.975792 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.975796 | orchestrator |
2026-04-08 00:52:17.975800 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-08 00:52:17.975803 | orchestrator | Wednesday 08 April 2026 00:45:02 +0000 (0:00:01.029) 0:03:16.320 *******
2026-04-08 00:52:17.975807 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-08 00:52:17.975811 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-08 00:52:17.975815 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-08 00:52:17.975818 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-04-08 00:52:17.975822 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-08 00:52:17.975826 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.975866 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.975871 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-08 00:52:17.975875 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.975882 | orchestrator |
2026-04-08 00:52:17.975888 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-08 00:52:17.975894 | orchestrator | Wednesday 08 April 2026 00:45:05 +0000 (0:00:02.563) 0:03:18.883 *******
2026-04-08 00:52:17.975901 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:52:17.975908 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:52:17.975914 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:52:17.975921 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:52:17.975933 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:52:17.975937 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:52:17.975941 | orchestrator |
2026-04-08 00:52:17.975945 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-08 00:52:17.975948 | orchestrator | Wednesday 08 April 2026 00:45:07 +0000 (0:00:01.878) 0:03:20.762 *******
2026-04-08 00:52:17.975952 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:52:17.975956 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:52:17.975959 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:52:17.975963 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:52:17.975967 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:52:17.975971 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:52:17.975974 | orchestrator |
2026-04-08 00:52:17.975978 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-08 00:52:17.975982 | orchestrator | Wednesday 08 April 2026 00:45:08 +0000 (0:00:00.948) 0:03:21.710 *******
2026-04-08 00:52:17.975985 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.975989 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.975993 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.975997 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:52:17.976001 | orchestrator |
2026-04-08 00:52:17.976005 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-08 00:52:17.976037 | orchestrator | Wednesday 08 April 2026 00:45:08 +0000 (0:00:00.696) 0:03:22.407 *******
2026-04-08 00:52:17.976044 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:17.976050 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:17.976055 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:17.976061 | orchestrator |
2026-04-08 00:52:17.976068 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-08 00:52:17.976074 | orchestrator | Wednesday 08 April 2026 00:45:09 +0000 (0:00:00.276) 0:03:22.684 *******
2026-04-08 00:52:17.976080 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:52:17.976086 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:52:17.976090 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:52:17.976093 | orchestrator |
2026-04-08 00:52:17.976114 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-08 00:52:17.976118 | orchestrator | Wednesday 08 April 2026 00:45:10 +0000 (0:00:01.225) 0:03:23.910 *******
2026-04-08 00:52:17.976122 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-08 00:52:17.976126 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-08 00:52:17.976130 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-08 00:52:17.976133 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.976137 | orchestrator |
2026-04-08 00:52:17.976141 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-08 00:52:17.976145 | orchestrator | Wednesday 08 April 2026 00:45:10 +0000 (0:00:00.528) 0:03:24.438 *******
2026-04-08 00:52:17.976148 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:17.976152 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:17.976156 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:17.976160 | orchestrator |
2026-04-08 00:52:17.976163 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-08 00:52:17.976167 | orchestrator | Wednesday 08 April 2026 00:45:11 +0000 (0:00:00.317) 0:03:24.756 *******
2026-04-08 00:52:17.976171 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.976175 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.976178 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.976182 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:52:17.976186 | orchestrator |
2026-04-08 00:52:17.976194 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-08 00:52:17.976198 | orchestrator | Wednesday 08 April 2026 00:45:12 +0000 (0:00:00.822) 0:03:25.579 *******
2026-04-08 00:52:17.976208 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-08 00:52:17.976212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-08 00:52:17.976216 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-08 00:52:17.976220 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.976223 | orchestrator |
2026-04-08 00:52:17.976227 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-08 00:52:17.976231 | orchestrator | Wednesday 08 April 2026 00:45:12 +0000 (0:00:00.344) 0:03:25.923 *******
2026-04-08 00:52:17.976234 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.976238 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.976242 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.976246 | orchestrator |
2026-04-08 00:52:17.976249 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-08 00:52:17.976253 | orchestrator | Wednesday 08 April 2026 00:45:12 +0000 (0:00:00.377) 0:03:26.301 *******
2026-04-08 00:52:17.976257 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.976260 | orchestrator |
2026-04-08 00:52:17.976264 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-08 00:52:17.976268 | orchestrator | Wednesday 08 April 2026 00:45:13 +0000 (0:00:00.175) 0:03:26.477 *******
2026-04-08 00:52:17.976272 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.976275 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.976279 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.976283 | orchestrator |
2026-04-08 00:52:17.976287 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-08 00:52:17.976290 | orchestrator | Wednesday
08 April 2026 00:45:13 +0000 (0:00:00.280) 0:03:26.757 ******* 2026-04-08 00:52:17.976294 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.976298 | orchestrator | 2026-04-08 00:52:17.976301 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-08 00:52:17.976305 | orchestrator | Wednesday 08 April 2026 00:45:13 +0000 (0:00:00.236) 0:03:26.993 ******* 2026-04-08 00:52:17.976309 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.976313 | orchestrator | 2026-04-08 00:52:17.976316 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-08 00:52:17.976320 | orchestrator | Wednesday 08 April 2026 00:45:14 +0000 (0:00:00.786) 0:03:27.779 ******* 2026-04-08 00:52:17.976324 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.976328 | orchestrator | 2026-04-08 00:52:17.976331 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-08 00:52:17.976335 | orchestrator | Wednesday 08 April 2026 00:45:14 +0000 (0:00:00.131) 0:03:27.911 ******* 2026-04-08 00:52:17.976339 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.976342 | orchestrator | 2026-04-08 00:52:17.976346 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-08 00:52:17.976350 | orchestrator | Wednesday 08 April 2026 00:45:14 +0000 (0:00:00.241) 0:03:28.152 ******* 2026-04-08 00:52:17.976354 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.976357 | orchestrator | 2026-04-08 00:52:17.976361 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-08 00:52:17.976365 | orchestrator | Wednesday 08 April 2026 00:45:14 +0000 (0:00:00.237) 0:03:28.390 ******* 2026-04-08 00:52:17.976369 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 00:52:17.976372 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-3)  2026-04-08 00:52:17.976376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:52:17.976380 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.976383 | orchestrator | 2026-04-08 00:52:17.976387 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-08 00:52:17.976409 | orchestrator | Wednesday 08 April 2026 00:45:15 +0000 (0:00:00.393) 0:03:28.784 ******* 2026-04-08 00:52:17.976414 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.976421 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.976425 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.976429 | orchestrator | 2026-04-08 00:52:17.976433 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-08 00:52:17.976436 | orchestrator | Wednesday 08 April 2026 00:45:15 +0000 (0:00:00.354) 0:03:29.139 ******* 2026-04-08 00:52:17.976440 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.976444 | orchestrator | 2026-04-08 00:52:17.976448 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-08 00:52:17.976454 | orchestrator | Wednesday 08 April 2026 00:45:15 +0000 (0:00:00.251) 0:03:29.390 ******* 2026-04-08 00:52:17.976459 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.976465 | orchestrator | 2026-04-08 00:52:17.976470 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-08 00:52:17.976476 | orchestrator | Wednesday 08 April 2026 00:45:16 +0000 (0:00:00.221) 0:03:29.612 ******* 2026-04-08 00:52:17.976481 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.976487 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.976493 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.976498 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.976504 | orchestrator | 2026-04-08 00:52:17.976510 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-08 00:52:17.976516 | orchestrator | Wednesday 08 April 2026 00:45:17 +0000 (0:00:01.098) 0:03:30.710 ******* 2026-04-08 00:52:17.976522 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.976528 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.976534 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.976540 | orchestrator | 2026-04-08 00:52:17.976545 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-08 00:52:17.976549 | orchestrator | Wednesday 08 April 2026 00:45:17 +0000 (0:00:00.344) 0:03:31.054 ******* 2026-04-08 00:52:17.976553 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:52:17.976557 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:52:17.976560 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:52:17.976564 | orchestrator | 2026-04-08 00:52:17.976572 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-08 00:52:17.976576 | orchestrator | Wednesday 08 April 2026 00:45:18 +0000 (0:00:01.387) 0:03:32.442 ******* 2026-04-08 00:52:17.976579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-08 00:52:17.976583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 00:52:17.976587 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:52:17.976591 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.976594 | orchestrator | 2026-04-08 00:52:17.976598 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-08 00:52:17.976602 | orchestrator | Wednesday 08 April 2026 00:45:19 +0000 (0:00:00.631) 
0:03:33.073 ******* 2026-04-08 00:52:17.976605 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.976609 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.976613 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.976617 | orchestrator | 2026-04-08 00:52:17.976620 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-08 00:52:17.976624 | orchestrator | Wednesday 08 April 2026 00:45:19 +0000 (0:00:00.322) 0:03:33.396 ******* 2026-04-08 00:52:17.976628 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.976631 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.976635 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.976639 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.976643 | orchestrator | 2026-04-08 00:52:17.976646 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-08 00:52:17.976650 | orchestrator | Wednesday 08 April 2026 00:45:21 +0000 (0:00:01.073) 0:03:34.470 ******* 2026-04-08 00:52:17.976659 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.976663 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.976666 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.976670 | orchestrator | 2026-04-08 00:52:17.976674 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-08 00:52:17.976678 | orchestrator | Wednesday 08 April 2026 00:45:21 +0000 (0:00:00.345) 0:03:34.815 ******* 2026-04-08 00:52:17.976681 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:52:17.976685 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:52:17.976689 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:52:17.976692 | orchestrator | 2026-04-08 00:52:17.976696 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] 
******************** 2026-04-08 00:52:17.976700 | orchestrator | Wednesday 08 April 2026 00:45:22 +0000 (0:00:01.409) 0:03:36.224 ******* 2026-04-08 00:52:17.976703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-08 00:52:17.976707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 00:52:17.976711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:52:17.976714 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.976718 | orchestrator | 2026-04-08 00:52:17.976722 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-08 00:52:17.976725 | orchestrator | Wednesday 08 April 2026 00:45:23 +0000 (0:00:00.834) 0:03:37.059 ******* 2026-04-08 00:52:17.976729 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.976733 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.976737 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.976740 | orchestrator | 2026-04-08 00:52:17.976744 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-04-08 00:52:17.976748 | orchestrator | Wednesday 08 April 2026 00:45:23 +0000 (0:00:00.289) 0:03:37.348 ******* 2026-04-08 00:52:17.976752 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.976755 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.976759 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.976763 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.976766 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.976787 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.976792 | orchestrator | 2026-04-08 00:52:17.976795 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-08 00:52:17.976799 | orchestrator | Wednesday 08 April 2026 00:45:24 +0000 (0:00:00.968) 0:03:38.316 ******* 2026-04-08 
00:52:17.976803 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.976807 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.976810 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.976814 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:17.976818 | orchestrator | 2026-04-08 00:52:17.976822 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-08 00:52:17.976826 | orchestrator | Wednesday 08 April 2026 00:45:25 +0000 (0:00:00.983) 0:03:39.300 ******* 2026-04-08 00:52:17.976829 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.976833 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.976837 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.976841 | orchestrator | 2026-04-08 00:52:17.976844 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-08 00:52:17.976848 | orchestrator | Wednesday 08 April 2026 00:45:26 +0000 (0:00:00.304) 0:03:39.604 ******* 2026-04-08 00:52:17.976852 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:17.976856 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:17.976859 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:17.976863 | orchestrator | 2026-04-08 00:52:17.976867 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-08 00:52:17.976871 | orchestrator | Wednesday 08 April 2026 00:45:27 +0000 (0:00:01.285) 0:03:40.890 ******* 2026-04-08 00:52:17.976878 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-08 00:52:17.976881 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-08 00:52:17.976885 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-08 00:52:17.976889 | orchestrator | skipping: [testbed-node-0] 2026-04-08 
00:52:17.976895 | orchestrator | 2026-04-08 00:52:17.976901 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-04-08 00:52:17.976911 | orchestrator | Wednesday 08 April 2026 00:45:28 +0000 (0:00:00.855) 0:03:41.746 ******* 2026-04-08 00:52:17.976917 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.976924 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.976930 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.976937 | orchestrator | 2026-04-08 00:52:17.976944 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-04-08 00:52:17.976950 | orchestrator | 2026-04-08 00:52:17.976957 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-08 00:52:17.976963 | orchestrator | Wednesday 08 April 2026 00:45:29 +0000 (0:00:00.825) 0:03:42.571 ******* 2026-04-08 00:52:17.976968 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:17.976972 | orchestrator | 2026-04-08 00:52:17.976975 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-08 00:52:17.976979 | orchestrator | Wednesday 08 April 2026 00:45:29 +0000 (0:00:00.503) 0:03:43.074 ******* 2026-04-08 00:52:17.976983 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:17.976987 | orchestrator | 2026-04-08 00:52:17.976990 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-08 00:52:17.976994 | orchestrator | Wednesday 08 April 2026 00:45:30 +0000 (0:00:00.728) 0:03:43.803 ******* 2026-04-08 00:52:17.976998 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.977002 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.977005 | 
orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.977009 | orchestrator | 2026-04-08 00:52:17.977013 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-08 00:52:17.977016 | orchestrator | Wednesday 08 April 2026 00:45:31 +0000 (0:00:00.774) 0:03:44.577 ******* 2026-04-08 00:52:17.977020 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.977024 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.977028 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.977031 | orchestrator | 2026-04-08 00:52:17.977035 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-08 00:52:17.977039 | orchestrator | Wednesday 08 April 2026 00:45:31 +0000 (0:00:00.367) 0:03:44.945 ******* 2026-04-08 00:52:17.977043 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.977046 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.977050 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.977054 | orchestrator | 2026-04-08 00:52:17.977057 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-08 00:52:17.977061 | orchestrator | Wednesday 08 April 2026 00:45:31 +0000 (0:00:00.325) 0:03:45.270 ******* 2026-04-08 00:52:17.977065 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.977069 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.977072 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.977076 | orchestrator | 2026-04-08 00:52:17.977080 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-08 00:52:17.977083 | orchestrator | Wednesday 08 April 2026 00:45:32 +0000 (0:00:00.325) 0:03:45.595 ******* 2026-04-08 00:52:17.977087 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.977091 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.977095 | orchestrator | ok: 
[testbed-node-2] 2026-04-08 00:52:17.977116 | orchestrator | 2026-04-08 00:52:17.977120 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-08 00:52:17.977128 | orchestrator | Wednesday 08 April 2026 00:45:33 +0000 (0:00:01.007) 0:03:46.602 ******* 2026-04-08 00:52:17.977132 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.977135 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.977139 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.977143 | orchestrator | 2026-04-08 00:52:17.977147 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-08 00:52:17.977150 | orchestrator | Wednesday 08 April 2026 00:45:33 +0000 (0:00:00.312) 0:03:46.915 ******* 2026-04-08 00:52:17.977180 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.977185 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.977189 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.977193 | orchestrator | 2026-04-08 00:52:17.977197 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-08 00:52:17.977201 | orchestrator | Wednesday 08 April 2026 00:45:33 +0000 (0:00:00.345) 0:03:47.260 ******* 2026-04-08 00:52:17.977204 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.977208 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.977212 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.977216 | orchestrator | 2026-04-08 00:52:17.977220 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-08 00:52:17.977223 | orchestrator | Wednesday 08 April 2026 00:45:34 +0000 (0:00:00.662) 0:03:47.923 ******* 2026-04-08 00:52:17.977227 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.977231 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.977235 | orchestrator | ok: [testbed-node-2] 2026-04-08 
00:52:17.977239 | orchestrator | 2026-04-08 00:52:17.977242 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-08 00:52:17.977246 | orchestrator | Wednesday 08 April 2026 00:45:35 +0000 (0:00:00.843) 0:03:48.767 ******* 2026-04-08 00:52:17.977250 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.977254 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.977258 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.977261 | orchestrator | 2026-04-08 00:52:17.977265 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-08 00:52:17.977269 | orchestrator | Wednesday 08 April 2026 00:45:35 +0000 (0:00:00.281) 0:03:49.048 ******* 2026-04-08 00:52:17.977273 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.977277 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.977280 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.977284 | orchestrator | 2026-04-08 00:52:17.977288 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-08 00:52:17.977292 | orchestrator | Wednesday 08 April 2026 00:45:35 +0000 (0:00:00.297) 0:03:49.346 ******* 2026-04-08 00:52:17.977295 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.977299 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.977303 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.977307 | orchestrator | 2026-04-08 00:52:17.977314 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-08 00:52:17.977317 | orchestrator | Wednesday 08 April 2026 00:45:36 +0000 (0:00:00.265) 0:03:49.611 ******* 2026-04-08 00:52:17.977321 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.977325 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.977329 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.977333 | 
orchestrator | 2026-04-08 00:52:17.977336 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-08 00:52:17.977340 | orchestrator | Wednesday 08 April 2026 00:45:36 +0000 (0:00:00.302) 0:03:49.913 ******* 2026-04-08 00:52:17.977344 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.977348 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.977351 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.977355 | orchestrator | 2026-04-08 00:52:17.977359 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-08 00:52:17.977363 | orchestrator | Wednesday 08 April 2026 00:45:36 +0000 (0:00:00.476) 0:03:50.390 ******* 2026-04-08 00:52:17.977370 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.977374 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.977377 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.977381 | orchestrator | 2026-04-08 00:52:17.977385 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-08 00:52:17.977389 | orchestrator | Wednesday 08 April 2026 00:45:37 +0000 (0:00:00.283) 0:03:50.673 ******* 2026-04-08 00:52:17.977392 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.977396 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:52:17.977400 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:52:17.977404 | orchestrator | 2026-04-08 00:52:17.977407 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-08 00:52:17.977411 | orchestrator | Wednesday 08 April 2026 00:45:37 +0000 (0:00:00.248) 0:03:50.921 ******* 2026-04-08 00:52:17.977415 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.977419 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.977423 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.977426 | orchestrator | 
2026-04-08 00:52:17.977430 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-08 00:52:17.977434 | orchestrator | Wednesday 08 April 2026 00:45:37 +0000 (0:00:00.309) 0:03:51.230 ******* 2026-04-08 00:52:17.977438 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.977441 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.977445 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.977449 | orchestrator | 2026-04-08 00:52:17.977453 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-08 00:52:17.977457 | orchestrator | Wednesday 08 April 2026 00:45:38 +0000 (0:00:00.578) 0:03:51.809 ******* 2026-04-08 00:52:17.977460 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.977464 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.977468 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.977472 | orchestrator | 2026-04-08 00:52:17.977475 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-08 00:52:17.977479 | orchestrator | Wednesday 08 April 2026 00:45:38 +0000 (0:00:00.461) 0:03:52.270 ******* 2026-04-08 00:52:17.977483 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.977487 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.977491 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.977494 | orchestrator | 2026-04-08 00:52:17.977498 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-08 00:52:17.977502 | orchestrator | Wednesday 08 April 2026 00:45:39 +0000 (0:00:00.359) 0:03:52.629 ******* 2026-04-08 00:52:17.977506 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:17.977510 | orchestrator | 2026-04-08 00:52:17.977513 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] 
************** 2026-04-08 00:52:17.977517 | orchestrator | Wednesday 08 April 2026 00:45:39 +0000 (0:00:00.786) 0:03:53.416 ******* 2026-04-08 00:52:17.977521 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.977525 | orchestrator | 2026-04-08 00:52:17.977541 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-04-08 00:52:17.977546 | orchestrator | Wednesday 08 April 2026 00:45:40 +0000 (0:00:00.172) 0:03:53.589 ******* 2026-04-08 00:52:17.977549 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-08 00:52:17.977553 | orchestrator | 2026-04-08 00:52:17.977557 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-08 00:52:17.977561 | orchestrator | Wednesday 08 April 2026 00:45:41 +0000 (0:00:01.125) 0:03:54.714 ******* 2026-04-08 00:52:17.977565 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.977568 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.977572 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.977576 | orchestrator | 2026-04-08 00:52:17.977580 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-08 00:52:17.977583 | orchestrator | Wednesday 08 April 2026 00:45:41 +0000 (0:00:00.353) 0:03:55.068 ******* 2026-04-08 00:52:17.977590 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.977594 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.977598 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.977602 | orchestrator | 2026-04-08 00:52:17.977606 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-08 00:52:17.977609 | orchestrator | Wednesday 08 April 2026 00:45:41 +0000 (0:00:00.337) 0:03:55.405 ******* 2026-04-08 00:52:17.977613 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:17.977617 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:17.977621 | 
orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:17.977625 | orchestrator | 2026-04-08 00:52:17.977628 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-08 00:52:17.977632 | orchestrator | Wednesday 08 April 2026 00:45:43 +0000 (0:00:01.365) 0:03:56.770 ******* 2026-04-08 00:52:17.977636 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:17.977639 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:17.977643 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:17.977647 | orchestrator | 2026-04-08 00:52:17.977651 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-08 00:52:17.977655 | orchestrator | Wednesday 08 April 2026 00:45:43 +0000 (0:00:00.634) 0:03:57.405 ******* 2026-04-08 00:52:17.977658 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:17.977662 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:17.977669 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:17.977675 | orchestrator | 2026-04-08 00:52:17.977681 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-08 00:52:17.977690 | orchestrator | Wednesday 08 April 2026 00:45:44 +0000 (0:00:00.588) 0:03:57.994 ******* 2026-04-08 00:52:17.977698 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.977705 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.977711 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.977717 | orchestrator | 2026-04-08 00:52:17.977723 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-08 00:52:17.977730 | orchestrator | Wednesday 08 April 2026 00:45:45 +0000 (0:00:00.743) 0:03:58.737 ******* 2026-04-08 00:52:17.977736 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:17.977742 | orchestrator | 2026-04-08 00:52:17.977748 | orchestrator | TASK [ceph-mon : Slurp admin keyring] 
******************************************
2026-04-08 00:52:17.977754 | orchestrator | Wednesday 08 April 2026 00:45:46 +0000 (0:00:01.116) 0:03:59.853 *******
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
orchestrator | Wednesday 08 April 2026 00:45:47 +0000 (0:00:00.810) 0:04:00.663 *******
orchestrator | changed: [testbed-node-0] => (item=None)
orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
orchestrator | ok: [testbed-node-1] => (item=None)
orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
orchestrator | changed: [testbed-node-0 -> {{ item }}]
orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
orchestrator | ok: [testbed-node-1 -> {{ item }}]
orchestrator | ok: [testbed-node-2] => (item=None)
orchestrator | ok: [testbed-node-2 -> {{ item }}]
orchestrator |
orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
orchestrator | Wednesday 08 April 2026 00:45:50 +0000 (0:00:03.114) 0:04:03.778 *******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
orchestrator | Wednesday 08 April 2026 00:45:51 +0000 (0:00:01.367) 0:04:05.146 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
orchestrator | Wednesday 08 April 2026 00:45:51 +0000 (0:00:00.275) 0:04:05.421 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
orchestrator | Wednesday 08 April 2026 00:45:52 +0000 (0:00:00.298) 0:04:05.720 *******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-1]
orchestrator |
orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
orchestrator | Wednesday 08 April 2026 00:45:54 +0000 (0:00:01.920) 0:04:07.640 *******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
orchestrator | Wednesday 08 April 2026 00:45:55 +0000 (0:00:01.305) 0:04:08.945 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
orchestrator | Wednesday 08 April 2026 00:45:55 +0000 (0:00:00.301) 0:04:09.247 *******
orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
orchestrator | Wednesday 08 April 2026 00:45:56 +0000 (0:00:00.731) 0:04:09.978 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
orchestrator | Wednesday 08 April 2026 00:45:56 +0000 (0:00:00.339) 0:04:10.318 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
orchestrator | Wednesday 08 April 2026 00:45:57 +0000 (0:00:00.291) 0:04:10.610 *******
orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
orchestrator | Wednesday 08 April 2026 00:45:57 +0000 (0:00:00.550) 0:04:11.161 *******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
orchestrator | Wednesday 08 April 2026 00:45:59 +0000 (0:00:01.959) 0:04:13.120 *******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
orchestrator | Wednesday 08 April 2026 00:46:00 +0000 (0:00:01.015) 0:04:14.135 *******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
orchestrator | Wednesday 08 April 2026 00:46:02 +0000 (0:00:01.661) 0:04:15.796 *******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-1]
orchestrator |
orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
orchestrator | Wednesday 08 April 2026 00:46:04 +0000 (0:00:02.644) 0:04:18.441 *******
orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
orchestrator | Wednesday 08 April 2026 00:46:05 +0000 (0:00:00.723) 0:04:19.165 *******
orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
orchestrator | Wednesday 08 April 2026 00:46:27 +0000 (0:00:21.969) 0:04:41.135 *******
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
orchestrator | Wednesday 08 April 2026 00:46:36 +0000 (0:00:08.441) 0:04:49.577 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
orchestrator | Wednesday 08 April 2026 00:46:36 +0000 (0:00:00.291) 0:04:49.868 *******
orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__200d4bbd265ef7485548730ae2ddece9fff12d47'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__200d4bbd265ef7485548730ae2ddece9fff12d47'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__200d4bbd265ef7485548730ae2ddece9fff12d47'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__200d4bbd265ef7485548730ae2ddece9fff12d47'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__200d4bbd265ef7485548730ae2ddece9fff12d47'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__200d4bbd265ef7485548730ae2ddece9fff12d47'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__200d4bbd265ef7485548730ae2ddece9fff12d47'}])
orchestrator |
orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
orchestrator | Wednesday 08 April 2026 00:46:51 +0000 (0:00:14.953) 0:05:04.822 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
orchestrator | Wednesday 08 April 2026 00:46:51 +0000 (0:00:00.329) 0:05:05.151 *******
orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
orchestrator | Wednesday 08 April 2026 00:46:52 +0000 (0:00:00.745) 0:05:05.897 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
orchestrator | Wednesday 08 April 2026 00:46:52 +0000 (0:00:00.272) 0:05:06.170 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
orchestrator | Wednesday 08 April 2026 00:46:52 +0000 (0:00:00.264) 0:05:06.434 *******
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
orchestrator | Wednesday 08 April 2026 00:46:53 +0000 (0:00:00.690) 0:05:07.124 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
orchestrator |
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
orchestrator | Wednesday 08 April 2026 00:46:54 +0000 (0:00:00.629) 0:05:07.754 *******
orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
orchestrator | Wednesday 08 April 2026 00:46:54 +0000 (0:00:00.420) 0:05:08.174 *******
orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
orchestrator | Wednesday 08 April 2026 00:46:55 +0000 (0:00:00.584) 0:05:08.758 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
orchestrator | Wednesday 08 April 2026 00:46:55 +0000 (0:00:00.632) 0:05:09.391 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
orchestrator | Wednesday 08 April 2026 00:46:56 +0000 (0:00:00.251) 0:05:09.643 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-1]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
orchestrator | Wednesday 08 April 2026 00:46:56 +0000 (0:00:00.274) 0:05:09.917 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
orchestrator | Wednesday 08 April 2026 00:46:56 +0000 (0:00:00.412) 0:05:10.330 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
orchestrator | Wednesday 08 April 2026 00:46:57 +0000 (0:00:00.595) 0:05:10.925 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
orchestrator | Wednesday 08 April 2026 00:46:57 +0000 (0:00:00.284) 0:05:11.209 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
orchestrator | Wednesday 08 April 2026 00:46:58 +0000 (0:00:00.256) 0:05:11.465 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
orchestrator | Wednesday 08 April 2026 00:46:58 +0000 (0:00:00.599) 0:05:12.065 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
orchestrator | Wednesday 08 April 2026 00:46:59 +0000 (0:00:00.856) 0:05:12.922 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
orchestrator | Wednesday 08 April 2026 00:46:59 +0000 (0:00:00.244) 0:05:13.166 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
orchestrator | Wednesday 08 April 2026 00:46:59 +0000 (0:00:00.278) 0:05:13.444 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
orchestrator | Wednesday 08 April 2026 00:47:00 +0000 (0:00:00.298) 0:05:13.743 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
orchestrator | Wednesday 08 April 2026 00:47:00 +0000 (0:00:00.428) 0:05:14.172 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
orchestrator | Wednesday 08 April 2026 00:47:00 +0000 (0:00:00.254) 0:05:14.426 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
orchestrator | Wednesday 08 April 2026 00:47:01 +0000 (0:00:00.330) 0:05:14.757 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
orchestrator | Wednesday 08 April 2026 00:47:01 +0000 (0:00:00.246) 0:05:15.003 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
orchestrator | Wednesday 08 April 2026 00:47:01 +0000 (0:00:00.425) 0:05:15.428 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
orchestrator | Wednesday 08 April 2026 00:47:02 +0000 (0:00:00.291) 0:05:15.720 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
orchestrator | Wednesday 08 April 2026 00:47:02 +0000 (0:00:00.496) 0:05:16.217 *******
orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
orchestrator |
orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
orchestrator | Wednesday 08 April 2026 00:47:03 +0000 (0:00:00.729) 0:05:16.947 *******
orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
orchestrator | Wednesday 08 April 2026 00:47:04 +0000 (0:00:00.623) 0:05:17.570 *******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
orchestrator | Wednesday 08 April 2026 00:47:04 +0000 (0:00:00.637) 0:05:18.208 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
orchestrator | Wednesday 08 April 2026 00:47:05 +0000 (0:00:00.271) 0:05:18.480 *******
orchestrator | changed: [testbed-node-0] => (item=None)
orchestrator | changed: [testbed-node-0] => (item=None)
orchestrator | changed: [testbed-node-0] => (item=None)
orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
orchestrator |
orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
orchestrator | Wednesday 08 April 2026 00:47:15 +0000 (0:00:10.209) 0:05:28.689 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
orchestrator | Wednesday 08 April 2026 00:47:15 +0000 (0:00:00.513) 0:05:29.203 *******
orchestrator | skipping: [testbed-node-0] => (item=None)
orchestrator | skipping: [testbed-node-1] => (item=None)
orchestrator | skipping: [testbed-node-2] => (item=None)
orchestrator | ok: [testbed-node-0] => (item=None)
orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
orchestrator |
orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
orchestrator | Wednesday 08 April 2026 00:47:17 +0000 (0:00:02.101) 0:05:31.304 *******
orchestrator | skipping: [testbed-node-0] => (item=None)
orchestrator | skipping: [testbed-node-1] => (item=None)
orchestrator | skipping: [testbed-node-2] => (item=None)
orchestrator | changed: [testbed-node-0] => (item=None)
orchestrator | changed: [testbed-node-1] => (item=None)
orchestrator | changed: [testbed-node-2] => (item=None)
orchestrator |
orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
orchestrator | Wednesday 08 April 2026 00:47:19 +0000 (0:00:01.282) 0:05:32.587 *******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
orchestrator | Wednesday 08 April 2026 00:47:19 +0000 (0:00:00.708) 0:05:33.295 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
orchestrator | Wednesday 08 April 2026 00:47:20 +0000 (0:00:00.576) 0:05:33.872 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
orchestrator | Wednesday 08 April 2026 00:47:20 +0000 (0:00:00.332) 0:05:34.205 *******
orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
orchestrator | Wednesday 08 April 2026 00:47:21 +0000 (0:00:00.496) 0:05:34.701 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
orchestrator | Wednesday 08 April 2026 00:47:21 +0000 (0:00:00.313) 0:05:35.015 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
orchestrator | Wednesday 08 April 2026 00:47:22 +0000 (0:00:00.545) 0:05:35.560 *******
orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
orchestrator | Wednesday 08 April 2026 00:47:22 +0000 (0:00:00.519) 0:05:36.080 *******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
orchestrator | Wednesday 08 April 2026 00:47:23 +0000 (0:00:01.199) 0:05:37.280 *******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
orchestrator | Wednesday 08 April 2026 00:47:25 +0000 (0:00:01.398) 0:05:38.678 *******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
orchestrator | Wednesday 08 April 2026 00:47:26 +0000 (0:00:01.705) 0:05:40.384 *******
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
orchestrator | Wednesday 08 April 2026 00:47:28 +0000 (0:00:01.981) 0:05:42.366 *******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
orchestrator |
orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
orchestrator | Wednesday 08 April 2026 00:47:29 +0000 (0:00:00.450) 0:05:42.817 *******
orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
orchestrator |
orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
orchestrator | Wednesday 08 April 2026 00:47:53 +0000 (0:00:24.480) 0:06:07.297 *******
orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
orchestrator |
orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
orchestrator | Wednesday 08 April 2026 00:47:55 +0000 (0:00:01.718) 0:06:09.016 *******
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
orchestrator | Wednesday 08 April 2026 00:47:55 +0000 (0:00:00.312) 0:06:09.328 *******
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
orchestrator | Wednesday 08 April 2026 00:47:56 +0000 (0:00:00.146) 0:06:09.475 *******
orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
orchestrator |
orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
orchestrator | Wednesday 08 April 2026 00:48:03 +0000 (0:00:07.416) 0:06:16.892 *******
orchestrator | skipping: [testbed-node-2] => (item=balancer)
orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
orchestrator | skipping: [testbed-node-2] => (item=status)
orchestrator |
orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
orchestrator | Wednesday 08 April 2026 00:48:08 +0000 (0:00:04.673) 0:06:21.565 *******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
2026-04-08 00:52:17.980479 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:17.980483 | orchestrator | 2026-04-08 00:52:17.980487 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-08 00:52:17.980491 | orchestrator | Wednesday 08 April 2026 00:48:08 +0000 (0:00:00.775) 0:06:22.340 ******* 2026-04-08 00:52:17.980494 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:52:17.980498 | orchestrator | 2026-04-08 00:52:17.980502 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-08 00:52:17.980506 | orchestrator | Wednesday 08 April 2026 00:48:09 +0000 (0:00:00.481) 0:06:22.822 ******* 2026-04-08 00:52:17.980509 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.980513 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.980517 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.980523 | orchestrator | 2026-04-08 00:52:17.980527 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-08 00:52:17.980531 | orchestrator | Wednesday 08 April 2026 00:48:09 +0000 (0:00:00.280) 0:06:23.102 ******* 2026-04-08 00:52:17.980535 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:52:17.980538 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:52:17.980542 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:52:17.980546 | orchestrator | 2026-04-08 00:52:17.980550 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-08 00:52:17.980553 | orchestrator | Wednesday 08 April 2026 00:48:10 +0000 (0:00:01.156) 0:06:24.258 ******* 2026-04-08 00:52:17.980557 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-08 00:52:17.980561 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-08 00:52:17.980565 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-08 00:52:17.980568 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:52:17.980572 | orchestrator | 2026-04-08 00:52:17.980576 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-04-08 00:52:17.980579 | orchestrator | Wednesday 08 April 2026 00:48:11 +0000 (0:00:00.537) 0:06:24.795 ******* 2026-04-08 00:52:17.980583 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:52:17.980587 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:52:17.980591 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:52:17.980595 | orchestrator | 2026-04-08 00:52:17.980598 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-04-08 00:52:17.980602 | orchestrator | 2026-04-08 00:52:17.980606 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-08 00:52:17.980610 | orchestrator | Wednesday 08 April 2026 00:48:11 +0000 (0:00:00.508) 0:06:25.304 ******* 2026-04-08 00:52:17.980614 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.980618 | orchestrator | 2026-04-08 00:52:17.980636 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-08 00:52:17.980641 | orchestrator | Wednesday 08 April 2026 00:48:12 +0000 (0:00:00.645) 0:06:25.949 ******* 2026-04-08 00:52:17.980645 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.980648 | orchestrator | 2026-04-08 00:52:17.980652 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-08 00:52:17.980656 | orchestrator | Wednesday 08 April 2026 00:48:13 +0000 (0:00:00.522) 0:06:26.471 ******* 2026-04-08 
00:52:17.980660 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.980663 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.980667 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.980671 | orchestrator | 2026-04-08 00:52:17.980674 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-08 00:52:17.980678 | orchestrator | Wednesday 08 April 2026 00:48:13 +0000 (0:00:00.268) 0:06:26.740 ******* 2026-04-08 00:52:17.980682 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.980686 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.980689 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.980693 | orchestrator | 2026-04-08 00:52:17.980697 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-08 00:52:17.980701 | orchestrator | Wednesday 08 April 2026 00:48:14 +0000 (0:00:00.857) 0:06:27.597 ******* 2026-04-08 00:52:17.980704 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.980708 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.980712 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.980715 | orchestrator | 2026-04-08 00:52:17.980719 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-08 00:52:17.980723 | orchestrator | Wednesday 08 April 2026 00:48:14 +0000 (0:00:00.639) 0:06:28.237 ******* 2026-04-08 00:52:17.980727 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.980733 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.980737 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.980741 | orchestrator | 2026-04-08 00:52:17.980745 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-08 00:52:17.980751 | orchestrator | Wednesday 08 April 2026 00:48:15 +0000 (0:00:00.626) 0:06:28.863 ******* 2026-04-08 00:52:17.980754 | orchestrator | skipping: 
[testbed-node-3] 2026-04-08 00:52:17.980758 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.980762 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.980766 | orchestrator | 2026-04-08 00:52:17.980770 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-08 00:52:17.980773 | orchestrator | Wednesday 08 April 2026 00:48:15 +0000 (0:00:00.261) 0:06:29.125 ******* 2026-04-08 00:52:17.980777 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.980781 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.980784 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.980788 | orchestrator | 2026-04-08 00:52:17.980792 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-08 00:52:17.980796 | orchestrator | Wednesday 08 April 2026 00:48:16 +0000 (0:00:00.442) 0:06:29.568 ******* 2026-04-08 00:52:17.980799 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.980803 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.980807 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.980810 | orchestrator | 2026-04-08 00:52:17.980814 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-08 00:52:17.980818 | orchestrator | Wednesday 08 April 2026 00:48:16 +0000 (0:00:00.256) 0:06:29.824 ******* 2026-04-08 00:52:17.980822 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.980825 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.980829 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.980833 | orchestrator | 2026-04-08 00:52:17.980837 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-08 00:52:17.980840 | orchestrator | Wednesday 08 April 2026 00:48:17 +0000 (0:00:00.645) 0:06:30.470 ******* 2026-04-08 00:52:17.980844 | orchestrator | ok: [testbed-node-3] 2026-04-08 
00:52:17.980848 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.980851 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.980855 | orchestrator | 2026-04-08 00:52:17.980859 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-08 00:52:17.980863 | orchestrator | Wednesday 08 April 2026 00:48:17 +0000 (0:00:00.610) 0:06:31.081 ******* 2026-04-08 00:52:17.980866 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.980870 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.980874 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.980878 | orchestrator | 2026-04-08 00:52:17.980881 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-08 00:52:17.980885 | orchestrator | Wednesday 08 April 2026 00:48:18 +0000 (0:00:00.444) 0:06:31.525 ******* 2026-04-08 00:52:17.980889 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.980892 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.980896 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.980900 | orchestrator | 2026-04-08 00:52:17.980904 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-08 00:52:17.980907 | orchestrator | Wednesday 08 April 2026 00:48:18 +0000 (0:00:00.261) 0:06:31.787 ******* 2026-04-08 00:52:17.980911 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.980915 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.980919 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.980922 | orchestrator | 2026-04-08 00:52:17.980926 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-08 00:52:17.980930 | orchestrator | Wednesday 08 April 2026 00:48:18 +0000 (0:00:00.278) 0:06:32.065 ******* 2026-04-08 00:52:17.980934 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.980938 | orchestrator | ok: 
[testbed-node-4] 2026-04-08 00:52:17.980941 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.980949 | orchestrator | 2026-04-08 00:52:17.980953 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-08 00:52:17.980957 | orchestrator | Wednesday 08 April 2026 00:48:18 +0000 (0:00:00.292) 0:06:32.358 ******* 2026-04-08 00:52:17.980961 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.980964 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.980968 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.980972 | orchestrator | 2026-04-08 00:52:17.980976 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-08 00:52:17.980982 | orchestrator | Wednesday 08 April 2026 00:48:19 +0000 (0:00:00.437) 0:06:32.796 ******* 2026-04-08 00:52:17.980986 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.980990 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.980993 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.980997 | orchestrator | 2026-04-08 00:52:17.981001 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-08 00:52:17.981005 | orchestrator | Wednesday 08 April 2026 00:48:19 +0000 (0:00:00.264) 0:06:33.060 ******* 2026-04-08 00:52:17.981008 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.981012 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.981016 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.981020 | orchestrator | 2026-04-08 00:52:17.981023 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-08 00:52:17.981027 | orchestrator | Wednesday 08 April 2026 00:48:19 +0000 (0:00:00.291) 0:06:33.351 ******* 2026-04-08 00:52:17.981031 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.981035 | orchestrator | skipping: [testbed-node-4] 2026-04-08 
00:52:17.981038 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.981042 | orchestrator | 2026-04-08 00:52:17.981046 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-08 00:52:17.981050 | orchestrator | Wednesday 08 April 2026 00:48:20 +0000 (0:00:00.254) 0:06:33.605 ******* 2026-04-08 00:52:17.981053 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.981057 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.981061 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.981064 | orchestrator | 2026-04-08 00:52:17.981068 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-08 00:52:17.981072 | orchestrator | Wednesday 08 April 2026 00:48:20 +0000 (0:00:00.478) 0:06:34.084 ******* 2026-04-08 00:52:17.981076 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.981079 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.981083 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.981087 | orchestrator | 2026-04-08 00:52:17.981091 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-08 00:52:17.981094 | orchestrator | Wednesday 08 April 2026 00:48:21 +0000 (0:00:00.456) 0:06:34.541 ******* 2026-04-08 00:52:17.981117 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.981126 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.981130 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.981134 | orchestrator | 2026-04-08 00:52:17.981138 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-08 00:52:17.981142 | orchestrator | Wednesday 08 April 2026 00:48:21 +0000 (0:00:00.263) 0:06:34.805 ******* 2026-04-08 00:52:17.981146 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-08 00:52:17.981150 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-08 00:52:17.981153 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-08 00:52:17.981157 | orchestrator | 2026-04-08 00:52:17.981161 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-08 00:52:17.981165 | orchestrator | Wednesday 08 April 2026 00:48:21 +0000 (0:00:00.616) 0:06:35.421 ******* 2026-04-08 00:52:17.981168 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.981176 | orchestrator | 2026-04-08 00:52:17.981180 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-08 00:52:17.981183 | orchestrator | Wednesday 08 April 2026 00:48:22 +0000 (0:00:00.609) 0:06:36.030 ******* 2026-04-08 00:52:17.981187 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.981191 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.981195 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.981198 | orchestrator | 2026-04-08 00:52:17.981202 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-08 00:52:17.981206 | orchestrator | Wednesday 08 April 2026 00:48:22 +0000 (0:00:00.253) 0:06:36.284 ******* 2026-04-08 00:52:17.981210 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.981214 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.981217 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.981221 | orchestrator | 2026-04-08 00:52:17.981225 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-08 00:52:17.981229 | orchestrator | Wednesday 08 April 2026 00:48:23 +0000 (0:00:00.240) 0:06:36.525 ******* 2026-04-08 00:52:17.981233 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.981236 | 
orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.981240 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.981244 | orchestrator | 2026-04-08 00:52:17.981248 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-08 00:52:17.981252 | orchestrator | Wednesday 08 April 2026 00:48:23 +0000 (0:00:00.697) 0:06:37.222 ******* 2026-04-08 00:52:17.981255 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.981259 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.981263 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.981267 | orchestrator | 2026-04-08 00:52:17.981271 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-08 00:52:17.981274 | orchestrator | Wednesday 08 April 2026 00:48:24 +0000 (0:00:00.280) 0:06:37.502 ******* 2026-04-08 00:52:17.981278 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-08 00:52:17.981282 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-08 00:52:17.981286 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-08 00:52:17.981290 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-08 00:52:17.981294 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-08 00:52:17.981298 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-08 00:52:17.981308 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-08 00:52:17.981311 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-08 00:52:17.981315 | orchestrator | changed: [testbed-node-5] => 
(item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-08 00:52:17.981319 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-08 00:52:17.981323 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-08 00:52:17.981327 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-08 00:52:17.981330 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-08 00:52:17.981334 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-08 00:52:17.981338 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-08 00:52:17.981342 | orchestrator | 2026-04-08 00:52:17.981345 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-08 00:52:17.981349 | orchestrator | Wednesday 08 April 2026 00:48:26 +0000 (0:00:01.969) 0:06:39.471 ******* 2026-04-08 00:52:17.981356 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.981360 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.981364 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.981368 | orchestrator | 2026-04-08 00:52:17.981372 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-08 00:52:17.981375 | orchestrator | Wednesday 08 April 2026 00:48:26 +0000 (0:00:00.275) 0:06:39.747 ******* 2026-04-08 00:52:17.981379 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.981383 | orchestrator | 2026-04-08 00:52:17.981387 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-08 00:52:17.981394 | orchestrator | Wednesday 08 April 2026 00:48:27 +0000 (0:00:00.787) 
0:06:40.534 ******* 2026-04-08 00:52:17.981398 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-08 00:52:17.981401 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-08 00:52:17.981405 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-08 00:52:17.981409 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-08 00:52:17.981413 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-04-08 00:52:17.981417 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-08 00:52:17.981421 | orchestrator | 2026-04-08 00:52:17.981424 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-08 00:52:17.981428 | orchestrator | Wednesday 08 April 2026 00:48:28 +0000 (0:00:01.031) 0:06:41.566 ******* 2026-04-08 00:52:17.981432 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:52:17.981436 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-08 00:52:17.981440 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-08 00:52:17.981443 | orchestrator | 2026-04-08 00:52:17.981447 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-08 00:52:17.981451 | orchestrator | Wednesday 08 April 2026 00:48:30 +0000 (0:00:02.184) 0:06:43.750 ******* 2026-04-08 00:52:17.981455 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-08 00:52:17.981459 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-08 00:52:17.981462 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:52:17.981466 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-08 00:52:17.981470 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-08 00:52:17.981474 | orchestrator | changed: [testbed-node-4] 2026-04-08 
00:52:17.981478 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-08 00:52:17.981481 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-08 00:52:17.981485 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:52:17.981489 | orchestrator | 2026-04-08 00:52:17.981493 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-08 00:52:17.981496 | orchestrator | Wednesday 08 April 2026 00:48:31 +0000 (0:00:01.114) 0:06:44.864 ******* 2026-04-08 00:52:17.981500 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-08 00:52:17.981504 | orchestrator | 2026-04-08 00:52:17.981508 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-08 00:52:17.981511 | orchestrator | Wednesday 08 April 2026 00:48:33 +0000 (0:00:02.508) 0:06:47.373 ******* 2026-04-08 00:52:17.981515 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.981519 | orchestrator | 2026-04-08 00:52:17.981523 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-04-08 00:52:17.981527 | orchestrator | Wednesday 08 April 2026 00:48:34 +0000 (0:00:00.617) 0:06:47.991 ******* 2026-04-08 00:52:17.981530 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c5eee886-e951-5b32-a4a0-4842fe7aed13', 'data_vg': 'ceph-c5eee886-e951-5b32-a4a0-4842fe7aed13'}) 2026-04-08 00:52:17.981538 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8', 'data_vg': 'ceph-19ae3695-7a84-5d0f-ba8d-a81d8fecc8c8'}) 2026-04-08 00:52:17.981542 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c80af5d6-1159-5955-8f01-035b314db1bd', 'data_vg': 'ceph-c80af5d6-1159-5955-8f01-035b314db1bd'}) 2026-04-08 00:52:17.981549 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e', 'data_vg': 'ceph-16b9c52d-170e-5f8d-b9c1-c30752bb4b9e'}) 2026-04-08 00:52:17.981553 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9c748ac0-b7ad-5284-8a6e-a168bddd5b66', 'data_vg': 'ceph-9c748ac0-b7ad-5284-8a6e-a168bddd5b66'}) 2026-04-08 00:52:17.981557 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d7d0ff5a-46f9-53d2-8425-61ef59e49033', 'data_vg': 'ceph-d7d0ff5a-46f9-53d2-8425-61ef59e49033'}) 2026-04-08 00:52:17.981561 | orchestrator | 2026-04-08 00:52:17.981565 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-08 00:52:17.981569 | orchestrator | Wednesday 08 April 2026 00:49:13 +0000 (0:00:39.439) 0:07:27.430 ******* 2026-04-08 00:52:17.981572 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.981576 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.981580 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.981584 | orchestrator | 2026-04-08 00:52:17.981588 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-08 00:52:17.981591 | orchestrator | Wednesday 08 April 2026 00:49:14 +0000 (0:00:00.543) 0:07:27.973 ******* 2026-04-08 00:52:17.981595 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.981599 | orchestrator | 2026-04-08 00:52:17.981603 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-08 00:52:17.981607 | orchestrator | Wednesday 08 April 2026 00:49:15 +0000 (0:00:00.488) 0:07:28.462 ******* 2026-04-08 00:52:17.981610 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.981614 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.981618 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.981622 | orchestrator | 2026-04-08 
00:52:17.981626 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-04-08 00:52:17.981629 | orchestrator | Wednesday 08 April 2026 00:49:15 +0000 (0:00:00.640) 0:07:29.102 *******
2026-04-08 00:52:17.981633 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.981637 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.981643 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.981647 | orchestrator |
2026-04-08 00:52:17.981651 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-04-08 00:52:17.981654 | orchestrator | Wednesday 08 April 2026 00:49:18 +0000 (0:00:02.715) 0:07:31.818 *******
2026-04-08 00:52:17.981658 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:52:17.981662 | orchestrator |
2026-04-08 00:52:17.981666 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-04-08 00:52:17.981670 | orchestrator | Wednesday 08 April 2026 00:49:18 +0000 (0:00:00.537) 0:07:32.355 *******
2026-04-08 00:52:17.981673 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:52:17.981677 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:52:17.981681 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:52:17.981685 | orchestrator |
2026-04-08 00:52:17.981689 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-04-08 00:52:17.981692 | orchestrator | Wednesday 08 April 2026 00:49:20 +0000 (0:00:01.158) 0:07:33.513 *******
2026-04-08 00:52:17.981696 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:52:17.981700 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:52:17.981704 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:52:17.981707 | orchestrator |
2026-04-08 00:52:17.981711 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-04-08 00:52:17.981718 | orchestrator | Wednesday 08 April 2026 00:49:21 +0000 (0:00:01.414) 0:07:34.928 *******
2026-04-08 00:52:17.981722 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:52:17.981726 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:52:17.981729 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:52:17.981733 | orchestrator |
2026-04-08 00:52:17.981737 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-04-08 00:52:17.981741 | orchestrator | Wednesday 08 April 2026 00:49:23 +0000 (0:00:01.659) 0:07:36.588 *******
2026-04-08 00:52:17.981744 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.981748 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.981752 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.981756 | orchestrator |
2026-04-08 00:52:17.981759 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-04-08 00:52:17.981763 | orchestrator | Wednesday 08 April 2026 00:49:23 +0000 (0:00:00.339) 0:07:36.928 *******
2026-04-08 00:52:17.981767 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.981771 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.981775 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.981778 | orchestrator |
2026-04-08 00:52:17.981782 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-04-08 00:52:17.981786 | orchestrator | Wednesday 08 April 2026 00:49:23 +0000 (0:00:00.333) 0:07:37.261 *******
2026-04-08 00:52:17.981790 | orchestrator | ok: [testbed-node-3] => (item=4)
2026-04-08 00:52:17.981796 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-04-08 00:52:17.981800 | orchestrator | ok: [testbed-node-5] => (item=1)
2026-04-08 00:52:17.981803 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-08 00:52:17.981807 | orchestrator | ok: [testbed-node-4] => (item=2)
2026-04-08 00:52:17.981811 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-04-08 00:52:17.981815 | orchestrator |
2026-04-08 00:52:17.981818 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-04-08 00:52:17.981822 | orchestrator | Wednesday 08 April 2026 00:49:25 +0000 (0:00:01.412) 0:07:38.674 *******
2026-04-08 00:52:17.981826 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-04-08 00:52:17.981830 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-04-08 00:52:17.981833 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-04-08 00:52:17.981837 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-04-08 00:52:17.981841 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-04-08 00:52:17.981845 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-04-08 00:52:17.981848 | orchestrator |
2026-04-08 00:52:17.981855 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-04-08 00:52:17.981859 | orchestrator | Wednesday 08 April 2026 00:49:27 +0000 (0:00:01.976) 0:07:40.651 *******
2026-04-08 00:52:17.981863 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-04-08 00:52:17.981867 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-04-08 00:52:17.981870 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-04-08 00:52:17.981874 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-04-08 00:52:17.981878 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-04-08 00:52:17.981882 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-04-08 00:52:17.981885 | orchestrator |
2026-04-08 00:52:17.981889 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-04-08 00:52:17.981893 | orchestrator | Wednesday 08 April 2026 00:49:30 +0000 (0:00:03.214) 0:07:43.866 *******
2026-04-08 00:52:17.981897 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.981901 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.981904 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-08 00:52:17.981908 | orchestrator |
2026-04-08 00:52:17.981912 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-04-08 00:52:17.981916 | orchestrator | Wednesday 08 April 2026 00:49:33 +0000 (0:00:02.676) 0:07:46.542 *******
2026-04-08 00:52:17.981924 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.981928 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.981932 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-04-08 00:52:17.981936 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-08 00:52:17.981940 | orchestrator |
2026-04-08 00:52:17.981943 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-04-08 00:52:17.981947 | orchestrator | Wednesday 08 April 2026 00:49:46 +0000 (0:00:12.913) 0:07:59.456 *******
2026-04-08 00:52:17.981951 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.981955 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.981959 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.981962 | orchestrator |
2026-04-08 00:52:17.981966 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-08 00:52:17.981972 | orchestrator | Wednesday 08 April 2026 00:49:46 +0000 (0:00:00.863) 0:08:00.319 *******
2026-04-08 00:52:17.981976 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.981980 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.981983 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.981987 | orchestrator |
2026-04-08 00:52:17.981991 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-08 00:52:17.981995 | orchestrator | Wednesday 08 April 2026 00:49:47 +0000 (0:00:00.581) 0:08:00.901 *******
2026-04-08 00:52:17.981998 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:52:17.982002 | orchestrator |
2026-04-08 00:52:17.982006 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-08 00:52:17.982010 | orchestrator | Wednesday 08 April 2026 00:49:47 +0000 (0:00:00.451) 0:08:01.352 *******
2026-04-08 00:52:17.982043 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-08 00:52:17.982047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-08 00:52:17.982051 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-08 00:52:17.982055 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982059 | orchestrator |
2026-04-08 00:52:17.982062 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-08 00:52:17.982066 | orchestrator | Wednesday 08 April 2026 00:49:48 +0000 (0:00:00.334) 0:08:01.687 *******
2026-04-08 00:52:17.982070 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982074 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.982078 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.982081 | orchestrator |
2026-04-08 00:52:17.982085 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-08 00:52:17.982089 | orchestrator | Wednesday 08 April 2026 00:49:48 +0000 (0:00:00.261) 0:08:01.948 *******
2026-04-08 00:52:17.982093 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982126 | orchestrator |
2026-04-08 00:52:17.982131 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-08 00:52:17.982135 | orchestrator | Wednesday 08 April 2026 00:49:48 +0000 (0:00:00.191) 0:08:02.140 *******
2026-04-08 00:52:17.982139 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982142 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.982146 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.982150 | orchestrator |
2026-04-08 00:52:17.982154 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-08 00:52:17.982158 | orchestrator | Wednesday 08 April 2026 00:49:49 +0000 (0:00:00.406) 0:08:02.547 *******
2026-04-08 00:52:17.982162 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982165 | orchestrator |
2026-04-08 00:52:17.982169 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-08 00:52:17.982173 | orchestrator | Wednesday 08 April 2026 00:49:49 +0000 (0:00:00.194) 0:08:02.741 *******
2026-04-08 00:52:17.982177 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982184 | orchestrator |
2026-04-08 00:52:17.982188 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-08 00:52:17.982192 | orchestrator | Wednesday 08 April 2026 00:49:49 +0000 (0:00:00.171) 0:08:02.912 *******
2026-04-08 00:52:17.982196 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982199 | orchestrator |
2026-04-08 00:52:17.982203 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-08 00:52:17.982207 | orchestrator | Wednesday 08 April 2026 00:49:49 +0000 (0:00:00.113) 0:08:03.026 *******
2026-04-08 00:52:17.982211 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982215 | orchestrator |
2026-04-08 00:52:17.982218 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-08 00:52:17.982222 | orchestrator | Wednesday 08 April 2026 00:49:49 +0000 (0:00:00.244) 0:08:03.270 *******
2026-04-08 00:52:17.982229 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982233 | orchestrator |
2026-04-08 00:52:17.982237 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-08 00:52:17.982241 | orchestrator | Wednesday 08 April 2026 00:49:50 +0000 (0:00:00.215) 0:08:03.485 *******
2026-04-08 00:52:17.982245 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-08 00:52:17.982249 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-08 00:52:17.982252 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-08 00:52:17.982256 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982260 | orchestrator |
2026-04-08 00:52:17.982264 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-08 00:52:17.982267 | orchestrator | Wednesday 08 April 2026 00:49:50 +0000 (0:00:00.384) 0:08:03.870 *******
2026-04-08 00:52:17.982271 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982275 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.982279 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.982283 | orchestrator |
2026-04-08 00:52:17.982286 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-08 00:52:17.982290 | orchestrator | Wednesday 08 April 2026 00:49:50 +0000 (0:00:00.288) 0:08:04.159 *******
2026-04-08 00:52:17.982294 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982298 | orchestrator |
2026-04-08 00:52:17.982302 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-08 00:52:17.982306 | orchestrator | Wednesday 08 April 2026 00:49:51 +0000 (0:00:00.602) 0:08:04.762 *******
2026-04-08 00:52:17.982309 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982313 | orchestrator |
2026-04-08 00:52:17.982317 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-04-08 00:52:17.982321 | orchestrator |
2026-04-08 00:52:17.982324 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-08 00:52:17.982328 | orchestrator | Wednesday 08 April 2026 00:49:51 +0000 (0:00:00.595) 0:08:05.357 *******
2026-04-08 00:52:17.982335 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:52:17.982340 | orchestrator |
2026-04-08 00:52:17.982344 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-08 00:52:17.982348 | orchestrator | Wednesday 08 April 2026 00:49:52 +0000 (0:00:00.989) 0:08:06.347 *******
2026-04-08 00:52:17.982351 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:52:17.982355 | orchestrator |
2026-04-08 00:52:17.982359 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-08 00:52:17.982363 | orchestrator | Wednesday 08 April 2026 00:49:53 +0000 (0:00:00.996) 0:08:07.344 *******
2026-04-08 00:52:17.982367 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982371 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.982381 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.982384 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:17.982388 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:17.982392 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:17.982396 | orchestrator |
2026-04-08 00:52:17.982400 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-08 00:52:17.982404 | orchestrator | Wednesday 08 April 2026 00:49:54 +0000 (0:00:00.826) 0:08:08.171 *******
2026-04-08 00:52:17.982407 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.982411 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.982415 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.982419 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.982423 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.982426 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.982430 | orchestrator |
2026-04-08 00:52:17.982434 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-08 00:52:17.982438 | orchestrator | Wednesday 08 April 2026 00:49:55 +0000 (0:00:00.860) 0:08:09.031 *******
2026-04-08 00:52:17.982442 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.982445 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.982449 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.982453 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.982457 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.982461 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.982464 | orchestrator |
2026-04-08 00:52:17.982468 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-08 00:52:17.982472 | orchestrator | Wednesday 08 April 2026 00:49:56 +0000 (0:00:00.560) 0:08:09.592 *******
2026-04-08 00:52:17.982476 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.982480 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.982484 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.982487 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.982491 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.982495 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.982499 | orchestrator |
2026-04-08 00:52:17.982503 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-08 00:52:17.982507 | orchestrator | Wednesday 08 April 2026 00:49:56 +0000 (0:00:00.836) 0:08:10.428 *******
2026-04-08 00:52:17.982510 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982514 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.982518 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.982522 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:17.982526 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:17.982530 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:17.982533 | orchestrator |
2026-04-08 00:52:17.982537 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-08 00:52:17.982541 | orchestrator | Wednesday 08 April 2026 00:49:57 +0000 (0:00:00.857) 0:08:11.285 *******
2026-04-08 00:52:17.982545 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982549 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.982552 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.982556 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.982560 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.982566 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.982570 | orchestrator |
2026-04-08 00:52:17.982574 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-08 00:52:17.982578 | orchestrator | Wednesday 08 April 2026 00:49:58 +0000 (0:00:00.753) 0:08:12.038 *******
2026-04-08 00:52:17.982582 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982585 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.982589 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.982593 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.982597 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.982600 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.982607 | orchestrator |
2026-04-08 00:52:17.982611 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-08 00:52:17.982615 | orchestrator | Wednesday 08 April 2026 00:49:59 +0000 (0:00:00.554) 0:08:12.593 *******
2026-04-08 00:52:17.982619 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.982623 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.982627 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:17.982630 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.982634 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:17.982638 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:17.982642 | orchestrator |
2026-04-08 00:52:17.982646 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-08 00:52:17.982649 | orchestrator | Wednesday 08 April 2026 00:50:00 +0000 (0:00:01.067) 0:08:13.661 *******
2026-04-08 00:52:17.982653 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.982657 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.982661 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.982664 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:17.982668 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:17.982672 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:17.982676 | orchestrator |
2026-04-08 00:52:17.982680 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-08 00:52:17.982683 | orchestrator | Wednesday 08 April 2026 00:50:01 +0000 (0:00:01.018) 0:08:14.680 *******
2026-04-08 00:52:17.982687 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982691 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.982695 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.982699 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.982702 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.982709 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.982713 | orchestrator |
2026-04-08 00:52:17.982717 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-08 00:52:17.982721 | orchestrator | Wednesday 08 April 2026 00:50:02 +0000 (0:00:00.949) 0:08:15.629 *******
2026-04-08 00:52:17.982724 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982728 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.982732 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.982736 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:17.982740 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:17.982743 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:17.982747 | orchestrator |
2026-04-08 00:52:17.982751 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-08 00:52:17.982755 | orchestrator | Wednesday 08 April 2026 00:50:02 +0000 (0:00:00.556) 0:08:16.186 *******
2026-04-08 00:52:17.982759 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.982762 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.982766 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.982770 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.982774 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.982778 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.982781 | orchestrator |
2026-04-08 00:52:17.982785 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-08 00:52:17.982789 | orchestrator | Wednesday 08 April 2026 00:50:03 +0000 (0:00:00.680) 0:08:16.866 *******
2026-04-08 00:52:17.982793 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.982796 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.982800 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.982804 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.982808 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.982812 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.982815 | orchestrator |
2026-04-08 00:52:17.982819 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-08 00:52:17.982823 | orchestrator | Wednesday 08 April 2026 00:50:04 +0000 (0:00:00.592) 0:08:17.458 *******
2026-04-08 00:52:17.982827 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.982834 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.982837 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.982841 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.982845 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.982849 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.982853 | orchestrator |
2026-04-08 00:52:17.982856 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-08 00:52:17.982860 | orchestrator | Wednesday 08 April 2026 00:50:04 +0000 (0:00:00.758) 0:08:18.216 *******
2026-04-08 00:52:17.982864 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982868 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.982871 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.982875 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.982879 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.982883 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.982887 | orchestrator |
2026-04-08 00:52:17.982890 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-08 00:52:17.982894 | orchestrator | Wednesday 08 April 2026 00:50:05 +0000 (0:00:00.565) 0:08:18.782 *******
2026-04-08 00:52:17.982898 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982902 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.982906 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.982910 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:52:17.982913 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:52:17.982917 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:52:17.982921 | orchestrator |
2026-04-08 00:52:17.982925 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-08 00:52:17.982929 | orchestrator | Wednesday 08 April 2026 00:50:06 +0000 (0:00:00.689) 0:08:19.472 *******
2026-04-08 00:52:17.982932 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.982936 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.982940 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.982944 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:17.982950 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:17.982954 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:17.982958 | orchestrator |
2026-04-08 00:52:17.982962 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-08 00:52:17.982965 | orchestrator | Wednesday 08 April 2026 00:50:06 +0000 (0:00:00.568) 0:08:20.040 *******
2026-04-08 00:52:17.982976 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.982985 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.982989 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.982992 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:17.982996 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:17.983000 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:17.983004 | orchestrator |
2026-04-08 00:52:17.983008 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-08 00:52:17.983011 | orchestrator | Wednesday 08 April 2026 00:50:07 +0000 (0:00:00.886) 0:08:20.926 *******
2026-04-08 00:52:17.983015 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.983019 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.983023 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.983026 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:17.983030 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:17.983034 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:17.983038 | orchestrator |
2026-04-08 00:52:17.983042 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-04-08 00:52:17.983045 | orchestrator | Wednesday 08 April 2026 00:50:08 +0000 (0:00:01.209) 0:08:22.136 *******
2026-04-08 00:52:17.983049 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-08 00:52:17.983053 | orchestrator |
2026-04-08 00:52:17.983057 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-04-08 00:52:17.983061 | orchestrator | Wednesday 08 April 2026 00:50:12 +0000 (0:00:03.897) 0:08:26.033 *******
2026-04-08 00:52:17.983068 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-08 00:52:17.983072 | orchestrator |
2026-04-08 00:52:17.983076 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-04-08 00:52:17.983079 | orchestrator | Wednesday 08 April 2026 00:50:14 +0000 (0:00:02.152) 0:08:28.186 *******
2026-04-08 00:52:17.983083 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:52:17.983087 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:52:17.983093 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:52:17.983109 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:17.983113 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:52:17.983117 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:52:17.983121 | orchestrator |
2026-04-08 00:52:17.983124 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-04-08 00:52:17.983128 | orchestrator | Wednesday 08 April 2026 00:50:16 +0000 (0:00:01.508) 0:08:29.694 *******
2026-04-08 00:52:17.983132 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:52:17.983136 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:52:17.983140 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:52:17.983143 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:52:17.983147 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:52:17.983151 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:52:17.983155 | orchestrator |
2026-04-08 00:52:17.983158 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-04-08 00:52:17.983162 | orchestrator | Wednesday 08 April 2026 00:50:17 +0000 (0:00:01.265) 0:08:30.960 *******
2026-04-08 00:52:17.983166 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:52:17.983172 | orchestrator |
2026-04-08 00:52:17.983176 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-04-08 00:52:17.983179 | orchestrator | Wednesday 08 April 2026 00:50:18 +0000 (0:00:01.250) 0:08:32.210 *******
2026-04-08 00:52:17.983183 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:52:17.983187 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:52:17.983191 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:52:17.983194 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:52:17.983198 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:52:17.983202 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:52:17.983206 | orchestrator |
2026-04-08 00:52:17.983210 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-04-08 00:52:17.983214 | orchestrator | Wednesday 08 April 2026 00:50:20 +0000 (0:00:01.471) 0:08:33.681 *******
2026-04-08 00:52:17.983217 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:52:17.983221 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:52:17.983225 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:52:17.983229 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:52:17.983232 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:52:17.983236 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:52:17.983240 | orchestrator |
2026-04-08 00:52:17.983244 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-04-08 00:52:17.983247 | orchestrator | Wednesday 08 April 2026 00:50:23 +0000 (0:00:03.472) 0:08:37.153 *******
2026-04-08 00:52:17.983251 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-08 00:52:17.983255 | orchestrator |
2026-04-08 00:52:17.983259 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-04-08 00:52:17.983263 | orchestrator | Wednesday 08 April 2026 00:50:24 +0000 (0:00:01.279) 0:08:38.433 *******
2026-04-08 00:52:17.983267 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.983270 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.983274 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.983278 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:17.983282 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:17.983288 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:17.983292 | orchestrator |
2026-04-08 00:52:17.983296 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-04-08 00:52:17.983300 | orchestrator | Wednesday 08 April 2026 00:50:25 +0000 (0:00:00.619) 0:08:39.053 *******
2026-04-08 00:52:17.983304 | orchestrator | changed: [testbed-node-3]
2026-04-08 00:52:17.983308 | orchestrator | changed: [testbed-node-4]
2026-04-08 00:52:17.983311 | orchestrator | changed: [testbed-node-5]
2026-04-08 00:52:17.983315 | orchestrator | changed: [testbed-node-0]
2026-04-08 00:52:17.983321 | orchestrator | changed: [testbed-node-1]
2026-04-08 00:52:17.983325 | orchestrator | changed: [testbed-node-2]
2026-04-08 00:52:17.983329 | orchestrator |
2026-04-08 00:52:17.983333 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-04-08 00:52:17.983337 | orchestrator | Wednesday 08 April 2026 00:50:28 +0000 (0:00:02.576) 0:08:41.630 *******
2026-04-08 00:52:17.983340 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.983344 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.983348 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.983352 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:52:17.983356 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:52:17.983359 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:52:17.983363 | orchestrator |
2026-04-08 00:52:17.983367 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-04-08 00:52:17.983371 | orchestrator |
2026-04-08 00:52:17.983375 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-08 00:52:17.983378 | orchestrator | Wednesday 08 April 2026 00:50:29 +0000 (0:00:00.940) 0:08:42.570 *******
2026-04-08 00:52:17.983382 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:52:17.983386 | orchestrator |
2026-04-08 00:52:17.983390 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-08 00:52:17.983394 | orchestrator | Wednesday 08 April 2026 00:50:29 +0000 (0:00:00.779) 0:08:43.350 *******
2026-04-08 00:52:17.983397 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:52:17.983401 | orchestrator |
2026-04-08 00:52:17.983405 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-08 00:52:17.983409 | orchestrator | Wednesday 08 April 2026 00:50:30 +0000 (0:00:00.553) 0:08:43.903 *******
2026-04-08 00:52:17.983413 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.983416 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.983420 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.983424 | orchestrator |
2026-04-08 00:52:17.983428 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-08 00:52:17.983440 | orchestrator | Wednesday 08 April 2026 00:50:30 +0000 (0:00:00.548) 0:08:44.451 *******
2026-04-08 00:52:17.983444 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.983448 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.983452 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.983456 | orchestrator |
2026-04-08 00:52:17.983459 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-08 00:52:17.983463 | orchestrator | Wednesday 08 April 2026 00:50:31 +0000 (0:00:00.787) 0:08:45.239 *******
2026-04-08 00:52:17.983467 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.983471 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.983474 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.983478 | orchestrator |
2026-04-08 00:52:17.983482 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-08 00:52:17.983486 | orchestrator | Wednesday 08 April 2026 00:50:32 +0000 (0:00:00.781) 0:08:46.021 *******
2026-04-08 00:52:17.983490 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.983493 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.983497 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.983501 | orchestrator |
2026-04-08 00:52:17.983508 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-08 00:52:17.983512 | orchestrator | Wednesday 08 April 2026 00:50:33 +0000 (0:00:00.748) 0:08:46.769 *******
2026-04-08 00:52:17.983516 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.983520 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.983523 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.983527 | orchestrator |
2026-04-08 00:52:17.983531 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-08 00:52:17.983535 | orchestrator | Wednesday 08 April 2026 00:50:33 +0000 (0:00:00.425) 0:08:47.195 *******
2026-04-08 00:52:17.983538 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.983542 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.983546 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.983550 | orchestrator |
2026-04-08 00:52:17.983553 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-08 00:52:17.983557 | orchestrator | Wednesday 08 April 2026 00:50:33 +0000 (0:00:00.252) 0:08:47.447 *******
2026-04-08 00:52:17.983561 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.983565 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.983569 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.983572 | orchestrator |
2026-04-08 00:52:17.983576 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-08 00:52:17.983580 | orchestrator | Wednesday 08 April 2026 00:50:34 +0000 (0:00:00.269) 0:08:47.717 *******
2026-04-08 00:52:17.983584 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.983587 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.983591 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.983595 | orchestrator |
2026-04-08 00:52:17.983599 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-08 00:52:17.983602 | orchestrator | Wednesday 08 April 2026 00:50:34 +0000 (0:00:00.593) 0:08:48.311 *******
2026-04-08 00:52:17.983606 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.983610 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:52:17.983614 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.983617 | orchestrator |
2026-04-08 00:52:17.983621 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-08 00:52:17.983625 | orchestrator | Wednesday 08 April 2026 00:50:35 +0000 (0:00:00.804) 0:08:49.115 *******
2026-04-08 00:52:17.983629 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.983633 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.983636 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.983640 | orchestrator |
2026-04-08 00:52:17.983644 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-08 00:52:17.983648 | orchestrator | Wednesday 08 April 2026 00:50:35 +0000 (0:00:00.277) 0:08:49.392 *******
2026-04-08 00:52:17.983652 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:52:17.983655 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:52:17.983659 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:52:17.983663 | orchestrator |
2026-04-08 00:52:17.983669 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-08 00:52:17.983673 | orchestrator | Wednesday 08 April 2026 00:50:36 +0000 (0:00:00.288) 0:08:49.680 *******
2026-04-08 00:52:17.983677 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:52:17.983681 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:52:17.983684 |
orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.983688 | orchestrator | 2026-04-08 00:52:17.983692 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-08 00:52:17.983696 | orchestrator | Wednesday 08 April 2026 00:50:36 +0000 (0:00:00.247) 0:08:49.928 ******* 2026-04-08 00:52:17.983700 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.983703 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.983707 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.983711 | orchestrator | 2026-04-08 00:52:17.983715 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-08 00:52:17.983721 | orchestrator | Wednesday 08 April 2026 00:50:36 +0000 (0:00:00.448) 0:08:50.376 ******* 2026-04-08 00:52:17.983725 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.983729 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.983733 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.983736 | orchestrator | 2026-04-08 00:52:17.983740 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-08 00:52:17.983744 | orchestrator | Wednesday 08 April 2026 00:50:37 +0000 (0:00:00.295) 0:08:50.672 ******* 2026-04-08 00:52:17.983748 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.983752 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.983756 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.983759 | orchestrator | 2026-04-08 00:52:17.983763 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-08 00:52:17.983767 | orchestrator | Wednesday 08 April 2026 00:50:37 +0000 (0:00:00.268) 0:08:50.940 ******* 2026-04-08 00:52:17.983771 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.983774 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.983778 | orchestrator | skipping: [testbed-node-5] 
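The ceph-handler tasks above follow a fixed pattern in every play: probe for each daemon's container ("Check for a … container"), then record a `handler_<daemon>_status` fact that later gates the restart handlers; checks for daemons not hosted on the node are skipped and leave the status unset. A minimal sketch of that probe-to-fact mapping, with all names illustrative rather than ceph-ansible's actual variables:

```python
# Sketch of the ceph-handler probe -> status-fact pattern seen in this log.
# A check that ran and found the container ("ok") yields rc == 0; a check
# that was skipped (daemon not in this play's groups) carries no rc at all.
def handler_status(check_results):
    """Map container-check outcomes to handler_<daemon>_status facts."""
    return {
        f"handler_{daemon}_status": result.get("rc") == 0
        for daemon, result in check_results.items()
    }

checks = {
    "osd": {"rc": 0},         # "ok" in the log: container is running
    "mds": {"rc": 0},
    "mon": {"skipped": True}, # "skipping": node is not in the mon group
}
print(handler_status(checks))
```

When all status facts are False for a daemon, the corresponding "Restart ceph … daemon(s)" handler later in the play is skipped, which matches the skipping lines shown for the mon, mgr, nfs, and rbd-mirror handlers.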
2026-04-08 00:52:17.983782 | orchestrator | 2026-04-08 00:52:17.983786 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-08 00:52:17.983789 | orchestrator | Wednesday 08 April 2026 00:50:37 +0000 (0:00:00.266) 0:08:51.206 ******* 2026-04-08 00:52:17.983796 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.983799 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.983803 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.983807 | orchestrator | 2026-04-08 00:52:17.983811 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-08 00:52:17.983815 | orchestrator | Wednesday 08 April 2026 00:50:38 +0000 (0:00:00.281) 0:08:51.488 ******* 2026-04-08 00:52:17.983818 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.983822 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.983826 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.983830 | orchestrator | 2026-04-08 00:52:17.983834 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-08 00:52:17.983837 | orchestrator | Wednesday 08 April 2026 00:50:38 +0000 (0:00:00.468) 0:08:51.956 ******* 2026-04-08 00:52:17.983841 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.983845 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.983849 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.983852 | orchestrator | 2026-04-08 00:52:17.983856 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-04-08 00:52:17.983860 | orchestrator | Wednesday 08 April 2026 00:50:38 +0000 (0:00:00.491) 0:08:52.448 ******* 2026-04-08 00:52:17.983864 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.983868 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.983872 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml 
for testbed-node-3 2026-04-08 00:52:17.983875 | orchestrator | 2026-04-08 00:52:17.983879 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-04-08 00:52:17.983883 | orchestrator | Wednesday 08 April 2026 00:50:39 +0000 (0:00:00.639) 0:08:53.088 ******* 2026-04-08 00:52:17.983887 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-08 00:52:17.983890 | orchestrator | 2026-04-08 00:52:17.983894 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-04-08 00:52:17.983898 | orchestrator | Wednesday 08 April 2026 00:50:41 +0000 (0:00:02.035) 0:08:55.123 ******* 2026-04-08 00:52:17.983902 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-04-08 00:52:17.983908 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.983912 | orchestrator | 2026-04-08 00:52:17.983915 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-04-08 00:52:17.983922 | orchestrator | Wednesday 08 April 2026 00:50:41 +0000 (0:00:00.214) 0:08:55.338 ******* 2026-04-08 00:52:17.983927 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-08 00:52:17.983936 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-08 00:52:17.983940 | orchestrator | 
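The "Create filesystem pools" task above iterates over pool spec dicts (name, pg_num, pgp_num, rule_name, size, application) delegated to the first mon node. A sketch, assuming the spec format shown in the `(item=...)` output, of how such specs translate to `ceph` CLI calls; the command strings illustrate the standard ceph CLI, not ceph-ansible's actual code path:

```python
# Pool specs mirroring the (item=...) dicts logged by ceph-ansible above.
pools = [
    {"application": "cephfs", "name": "cephfs_data", "pg_num": 16,
     "pgp_num": 16, "rule_name": "replicated_rule", "size": 3},
    {"application": "cephfs", "name": "cephfs_metadata", "pg_num": 16,
     "pgp_num": 16, "rule_name": "replicated_rule", "size": 3},
]

def pool_commands(spec):
    """Yield the ceph CLI calls that would realize one pool spec."""
    yield (f"ceph osd pool create {spec['name']} {spec['pg_num']} "
           f"{spec['pgp_num']} replicated {spec['rule_name']}")
    yield f"ceph osd pool set {spec['name']} size {spec['size']}"
    yield (f"ceph osd pool application enable "
           f"{spec['name']} {spec['application']}")

for spec in pools:
    for cmd in pool_commands(spec):
        print(cmd)

# The follow-up "Create ceph filesystem" task then ties both pools together:
print("ceph fs new cephfs cephfs_metadata cephfs_data")
```

With both pools at pg_num 16 and size 3 across the three OSD nodes, the subsequent `ceph fs new` step is what the "Create ceph filesystem" task reports as changed.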
2026-04-08 00:52:17.983944 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-04-08 00:52:17.983948 | orchestrator | Wednesday 08 April 2026 00:50:50 +0000 (0:00:08.453) 0:09:03.792 ******* 2026-04-08 00:52:17.983951 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-08 00:52:17.983955 | orchestrator | 2026-04-08 00:52:17.983961 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-04-08 00:52:17.983965 | orchestrator | Wednesday 08 April 2026 00:50:53 +0000 (0:00:03.518) 0:09:07.311 ******* 2026-04-08 00:52:17.983969 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.983973 | orchestrator | 2026-04-08 00:52:17.983977 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-04-08 00:52:17.983980 | orchestrator | Wednesday 08 April 2026 00:50:54 +0000 (0:00:00.522) 0:09:07.833 ******* 2026-04-08 00:52:17.983984 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-08 00:52:17.983988 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-04-08 00:52:17.983992 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-08 00:52:17.983996 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-08 00:52:17.983999 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-04-08 00:52:17.984003 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-04-08 00:52:17.984007 | orchestrator | 2026-04-08 00:52:17.984011 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-04-08 00:52:17.984014 | orchestrator | Wednesday 08 April 2026 00:50:55 +0000 (0:00:01.402) 
0:09:09.236 ******* 2026-04-08 00:52:17.984018 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:52:17.984022 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-08 00:52:17.984026 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-08 00:52:17.984029 | orchestrator | 2026-04-08 00:52:17.984033 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-04-08 00:52:17.984037 | orchestrator | Wednesday 08 April 2026 00:50:57 +0000 (0:00:02.095) 0:09:11.332 ******* 2026-04-08 00:52:17.984041 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-08 00:52:17.984047 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-08 00:52:17.984051 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:52:17.984054 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-08 00:52:17.984058 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-08 00:52:17.984062 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:52:17.984066 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-08 00:52:17.984070 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-08 00:52:17.984073 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:52:17.984077 | orchestrator | 2026-04-08 00:52:17.984081 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-04-08 00:52:17.984085 | orchestrator | Wednesday 08 April 2026 00:50:59 +0000 (0:00:01.230) 0:09:12.563 ******* 2026-04-08 00:52:17.984092 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:52:17.984104 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:52:17.984108 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:52:17.984112 | orchestrator | 2026-04-08 00:52:17.984116 | orchestrator | TASK [ceph-mds : Non_containerized.yml] 
**************************************** 2026-04-08 00:52:17.984119 | orchestrator | Wednesday 08 April 2026 00:51:01 +0000 (0:00:02.475) 0:09:15.038 ******* 2026-04-08 00:52:17.984123 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.984127 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.984131 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.984135 | orchestrator | 2026-04-08 00:52:17.984138 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-08 00:52:17.984142 | orchestrator | Wednesday 08 April 2026 00:51:02 +0000 (0:00:00.429) 0:09:15.467 ******* 2026-04-08 00:52:17.984146 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.984150 | orchestrator | 2026-04-08 00:52:17.984154 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-08 00:52:17.984157 | orchestrator | Wednesday 08 April 2026 00:51:02 +0000 (0:00:00.464) 0:09:15.932 ******* 2026-04-08 00:52:17.984161 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.984165 | orchestrator | 2026-04-08 00:52:17.984169 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-08 00:52:17.984173 | orchestrator | Wednesday 08 April 2026 00:51:03 +0000 (0:00:00.640) 0:09:16.572 ******* 2026-04-08 00:52:17.984176 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:52:17.984180 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:52:17.984184 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:52:17.984188 | orchestrator | 2026-04-08 00:52:17.984192 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-08 00:52:17.984195 | orchestrator | Wednesday 08 April 2026 00:51:04 +0000 
(0:00:01.102) 0:09:17.675 ******* 2026-04-08 00:52:17.984199 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:52:17.984203 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:52:17.984207 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:52:17.984211 | orchestrator | 2026-04-08 00:52:17.984214 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-08 00:52:17.984218 | orchestrator | Wednesday 08 April 2026 00:51:05 +0000 (0:00:01.014) 0:09:18.690 ******* 2026-04-08 00:52:17.984222 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:52:17.984226 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:52:17.984229 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:52:17.984233 | orchestrator | 2026-04-08 00:52:17.984237 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-08 00:52:17.984241 | orchestrator | Wednesday 08 April 2026 00:51:06 +0000 (0:00:01.542) 0:09:20.232 ******* 2026-04-08 00:52:17.984245 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:52:17.984248 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:52:17.984252 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:52:17.984256 | orchestrator | 2026-04-08 00:52:17.984262 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-08 00:52:17.984266 | orchestrator | Wednesday 08 April 2026 00:51:08 +0000 (0:00:01.963) 0:09:22.196 ******* 2026-04-08 00:52:17.984270 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.984274 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.984278 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.984281 | orchestrator | 2026-04-08 00:52:17.984285 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-08 00:52:17.984289 | orchestrator | Wednesday 08 April 2026 00:51:09 +0000 (0:00:01.119) 0:09:23.316 
******* 2026-04-08 00:52:17.984293 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:52:17.984297 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:52:17.984301 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:52:17.984307 | orchestrator | 2026-04-08 00:52:17.984311 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-08 00:52:17.984315 | orchestrator | Wednesday 08 April 2026 00:51:10 +0000 (0:00:00.770) 0:09:24.087 ******* 2026-04-08 00:52:17.984318 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.984322 | orchestrator | 2026-04-08 00:52:17.984326 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-08 00:52:17.984330 | orchestrator | Wednesday 08 April 2026 00:51:11 +0000 (0:00:00.463) 0:09:24.550 ******* 2026-04-08 00:52:17.984333 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.984337 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.984341 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.984345 | orchestrator | 2026-04-08 00:52:17.984349 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-08 00:52:17.984353 | orchestrator | Wednesday 08 April 2026 00:51:11 +0000 (0:00:00.275) 0:09:24.826 ******* 2026-04-08 00:52:17.984356 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:52:17.984360 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:52:17.984364 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:52:17.984368 | orchestrator | 2026-04-08 00:52:17.984371 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-08 00:52:17.984375 | orchestrator | Wednesday 08 April 2026 00:51:12 +0000 (0:00:01.252) 0:09:26.079 ******* 2026-04-08 00:52:17.984379 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-3)  2026-04-08 00:52:17.984383 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 00:52:17.984387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:52:17.984391 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.984395 | orchestrator | 2026-04-08 00:52:17.984399 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-08 00:52:17.984403 | orchestrator | Wednesday 08 April 2026 00:51:13 +0000 (0:00:00.551) 0:09:26.631 ******* 2026-04-08 00:52:17.984407 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.984411 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.984415 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.984418 | orchestrator | 2026-04-08 00:52:17.984422 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-08 00:52:17.984426 | orchestrator | 2026-04-08 00:52:17.984430 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-08 00:52:17.984434 | orchestrator | Wednesday 08 April 2026 00:51:13 +0000 (0:00:00.473) 0:09:27.104 ******* 2026-04-08 00:52:17.984437 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.984441 | orchestrator | 2026-04-08 00:52:17.984445 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-08 00:52:17.984449 | orchestrator | Wednesday 08 April 2026 00:51:14 +0000 (0:00:00.592) 0:09:27.696 ******* 2026-04-08 00:52:17.984453 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.984456 | orchestrator | 2026-04-08 00:52:17.984460 | orchestrator | TASK [ceph-handler : Check for a mon container] 
******************************** 2026-04-08 00:52:17.984464 | orchestrator | Wednesday 08 April 2026 00:51:14 +0000 (0:00:00.445) 0:09:28.141 ******* 2026-04-08 00:52:17.984468 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.984472 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.984475 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.984479 | orchestrator | 2026-04-08 00:52:17.984483 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-08 00:52:17.984487 | orchestrator | Wednesday 08 April 2026 00:51:14 +0000 (0:00:00.276) 0:09:28.418 ******* 2026-04-08 00:52:17.984490 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.984494 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.984501 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.984505 | orchestrator | 2026-04-08 00:52:17.984508 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-08 00:52:17.984512 | orchestrator | Wednesday 08 April 2026 00:51:15 +0000 (0:00:00.815) 0:09:29.234 ******* 2026-04-08 00:52:17.984516 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.984520 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.984524 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.984527 | orchestrator | 2026-04-08 00:52:17.984531 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-08 00:52:17.984535 | orchestrator | Wednesday 08 April 2026 00:51:16 +0000 (0:00:00.592) 0:09:29.826 ******* 2026-04-08 00:52:17.984539 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.984543 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.984546 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.984550 | orchestrator | 2026-04-08 00:52:17.984554 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-08 
00:52:17.984558 | orchestrator | Wednesday 08 April 2026 00:51:16 +0000 (0:00:00.624) 0:09:30.451 ******* 2026-04-08 00:52:17.984562 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.984565 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.984569 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.984573 | orchestrator | 2026-04-08 00:52:17.984577 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-08 00:52:17.984584 | orchestrator | Wednesday 08 April 2026 00:51:17 +0000 (0:00:00.268) 0:09:30.719 ******* 2026-04-08 00:52:17.984588 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.984592 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.984595 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.984599 | orchestrator | 2026-04-08 00:52:17.984603 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-08 00:52:17.984607 | orchestrator | Wednesday 08 April 2026 00:51:17 +0000 (0:00:00.421) 0:09:31.141 ******* 2026-04-08 00:52:17.984610 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.984614 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.984618 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.984622 | orchestrator | 2026-04-08 00:52:17.984650 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-08 00:52:17.984654 | orchestrator | Wednesday 08 April 2026 00:51:17 +0000 (0:00:00.286) 0:09:31.428 ******* 2026-04-08 00:52:17.984658 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.984662 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.984666 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.984669 | orchestrator | 2026-04-08 00:52:17.984673 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-08 00:52:17.984677 | 
orchestrator | Wednesday 08 April 2026 00:51:18 +0000 (0:00:00.624) 0:09:32.053 ******* 2026-04-08 00:52:17.984681 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.984685 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.984688 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.984692 | orchestrator | 2026-04-08 00:52:17.984696 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-08 00:52:17.984700 | orchestrator | Wednesday 08 April 2026 00:51:19 +0000 (0:00:00.657) 0:09:32.710 ******* 2026-04-08 00:52:17.984703 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.984707 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.984711 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.984715 | orchestrator | 2026-04-08 00:52:17.984718 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-08 00:52:17.984722 | orchestrator | Wednesday 08 April 2026 00:51:19 +0000 (0:00:00.413) 0:09:33.124 ******* 2026-04-08 00:52:17.984726 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.984730 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.984734 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.984737 | orchestrator | 2026-04-08 00:52:17.984746 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-08 00:52:17.984750 | orchestrator | Wednesday 08 April 2026 00:51:19 +0000 (0:00:00.287) 0:09:33.412 ******* 2026-04-08 00:52:17.984754 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.984758 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.984762 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.984765 | orchestrator | 2026-04-08 00:52:17.984769 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-08 00:52:17.984773 | orchestrator | Wednesday 08 April 2026 
00:51:20 +0000 (0:00:00.295) 0:09:33.708 ******* 2026-04-08 00:52:17.984777 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.984780 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.984784 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.984788 | orchestrator | 2026-04-08 00:52:17.984792 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-08 00:52:17.984795 | orchestrator | Wednesday 08 April 2026 00:51:20 +0000 (0:00:00.298) 0:09:34.007 ******* 2026-04-08 00:52:17.984799 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.984803 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.984807 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.984810 | orchestrator | 2026-04-08 00:52:17.984814 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-08 00:52:17.984818 | orchestrator | Wednesday 08 April 2026 00:51:21 +0000 (0:00:00.589) 0:09:34.596 ******* 2026-04-08 00:52:17.984822 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.984826 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.984829 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.984833 | orchestrator | 2026-04-08 00:52:17.984837 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-08 00:52:17.984840 | orchestrator | Wednesday 08 April 2026 00:51:21 +0000 (0:00:00.309) 0:09:34.905 ******* 2026-04-08 00:52:17.984844 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.984848 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.984852 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.984855 | orchestrator | 2026-04-08 00:52:17.984859 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-08 00:52:17.984863 | orchestrator | Wednesday 08 April 2026 00:51:21 +0000 (0:00:00.310) 
0:09:35.216 ******* 2026-04-08 00:52:17.984867 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.984871 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.984874 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.984878 | orchestrator | 2026-04-08 00:52:17.984882 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-08 00:52:17.984885 | orchestrator | Wednesday 08 April 2026 00:51:22 +0000 (0:00:00.276) 0:09:35.493 ******* 2026-04-08 00:52:17.984889 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.984893 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.984897 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.984900 | orchestrator | 2026-04-08 00:52:17.984904 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-08 00:52:17.984908 | orchestrator | Wednesday 08 April 2026 00:51:22 +0000 (0:00:00.456) 0:09:35.949 ******* 2026-04-08 00:52:17.984912 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.984915 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.984919 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.984923 | orchestrator | 2026-04-08 00:52:17.984926 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-08 00:52:17.984930 | orchestrator | Wednesday 08 April 2026 00:51:22 +0000 (0:00:00.464) 0:09:36.414 ******* 2026-04-08 00:52:17.984934 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.984938 | orchestrator | 2026-04-08 00:52:17.984942 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-08 00:52:17.984945 | orchestrator | Wednesday 08 April 2026 00:51:23 +0000 (0:00:00.597) 0:09:37.011 ******* 2026-04-08 00:52:17.984957 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:52:17.984961 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-08 00:52:17.984965 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-08 00:52:17.984968 | orchestrator | 2026-04-08 00:52:17.984972 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-08 00:52:17.984976 | orchestrator | Wednesday 08 April 2026 00:51:25 +0000 (0:00:01.891) 0:09:38.903 ******* 2026-04-08 00:52:17.984980 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-08 00:52:17.984983 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-08 00:52:17.984987 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:52:17.984991 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-08 00:52:17.984995 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-08 00:52:17.984998 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:52:17.985002 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-08 00:52:17.985006 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-08 00:52:17.985010 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:52:17.985013 | orchestrator | 2026-04-08 00:52:17.985017 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-08 00:52:17.985021 | orchestrator | Wednesday 08 April 2026 00:51:26 +0000 (0:00:01.050) 0:09:39.954 ******* 2026-04-08 00:52:17.985025 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.985028 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.985032 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.985036 | orchestrator | 2026-04-08 00:52:17.985039 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-08 00:52:17.985043 | orchestrator | Wednesday 08 April 2026 00:51:26 +0000 
(0:00:00.284) 0:09:40.239 ******* 2026-04-08 00:52:17.985047 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.985051 | orchestrator | 2026-04-08 00:52:17.985055 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-08 00:52:17.985058 | orchestrator | Wednesday 08 April 2026 00:51:27 +0000 (0:00:00.672) 0:09:40.911 ******* 2026-04-08 00:52:17.985064 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-08 00:52:17.985068 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-08 00:52:17.985072 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-08 00:52:17.985076 | orchestrator | 2026-04-08 00:52:17.985080 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-08 00:52:17.985083 | orchestrator | Wednesday 08 April 2026 00:51:28 +0000 (0:00:00.783) 0:09:41.695 ******* 2026-04-08 00:52:17.985087 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:52:17.985091 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-08 00:52:17.985095 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:52:17.985114 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-08 00:52:17.985118 | orchestrator | 
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:52:17.985122 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-08 00:52:17.985126 | orchestrator | 2026-04-08 00:52:17.985134 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-08 00:52:17.985138 | orchestrator | Wednesday 08 April 2026 00:51:32 +0000 (0:00:04.302) 0:09:45.998 ******* 2026-04-08 00:52:17.985143 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:52:17.985149 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-08 00:52:17.985155 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:52:17.985161 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-08 00:52:17.985166 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:52:17.985172 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-08 00:52:17.985179 | orchestrator | 2026-04-08 00:52:17.985186 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-08 00:52:17.985195 | orchestrator | Wednesday 08 April 2026 00:51:34 +0000 (0:00:02.262) 0:09:48.260 ******* 2026-04-08 00:52:17.985203 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-08 00:52:17.985211 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:52:17.985217 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-08 00:52:17.985222 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:52:17.985227 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-08 00:52:17.985232 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:52:17.985238 | orchestrator | 2026-04-08 
00:52:17.985243 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-08 00:52:17.985248 | orchestrator | Wednesday 08 April 2026 00:51:36 +0000 (0:00:01.637) 0:09:49.897 ******* 2026-04-08 00:52:17.985258 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-08 00:52:17.985263 | orchestrator | 2026-04-08 00:52:17.985269 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-08 00:52:17.985274 | orchestrator | Wednesday 08 April 2026 00:51:36 +0000 (0:00:00.231) 0:09:50.129 ******* 2026-04-08 00:52:17.985279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-08 00:52:17.985285 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-08 00:52:17.985291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-08 00:52:17.985297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-08 00:52:17.985302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-08 00:52:17.985308 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.985314 | orchestrator | 2026-04-08 00:52:17.985320 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-08 00:52:17.985326 | orchestrator | Wednesday 08 April 2026 00:51:37 +0000 (0:00:00.556) 0:09:50.685 ******* 2026-04-08 00:52:17.985331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 
3, 'type': 'replicated'}})  2026-04-08 00:52:17.985337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-08 00:52:17.985342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-08 00:52:17.985353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-08 00:52:17.985358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-08 00:52:17.985370 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.985376 | orchestrator | 2026-04-08 00:52:17.985382 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-08 00:52:17.985388 | orchestrator | Wednesday 08 April 2026 00:51:37 +0000 (0:00:00.539) 0:09:51.224 ******* 2026-04-08 00:52:17.985394 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-08 00:52:17.985401 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-08 00:52:17.985405 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-08 00:52:17.985409 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-08 00:52:17.985413 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 
'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-08 00:52:17.985417 | orchestrator | 2026-04-08 00:52:17.985420 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-08 00:52:17.985424 | orchestrator | Wednesday 08 April 2026 00:52:05 +0000 (0:00:28.217) 0:10:19.442 ******* 2026-04-08 00:52:17.985428 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.985432 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.985435 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.985439 | orchestrator | 2026-04-08 00:52:17.985443 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-08 00:52:17.985447 | orchestrator | Wednesday 08 April 2026 00:52:06 +0000 (0:00:00.283) 0:10:19.726 ******* 2026-04-08 00:52:17.985450 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.985454 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.985458 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.985461 | orchestrator | 2026-04-08 00:52:17.985465 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-08 00:52:17.985469 | orchestrator | Wednesday 08 April 2026 00:52:06 +0000 (0:00:00.416) 0:10:20.142 ******* 2026-04-08 00:52:17.985473 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.985477 | orchestrator | 2026-04-08 00:52:17.985480 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-04-08 00:52:17.985484 | orchestrator | Wednesday 08 April 2026 00:52:07 +0000 (0:00:00.463) 0:10:20.605 ******* 2026-04-08 00:52:17.985488 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.985492 | orchestrator | 
2026-04-08 00:52:17.985495 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-08 00:52:17.985499 | orchestrator | Wednesday 08 April 2026 00:52:07 +0000 (0:00:00.615) 0:10:21.221 ******* 2026-04-08 00:52:17.985506 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:52:17.985510 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:52:17.985514 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:52:17.985518 | orchestrator | 2026-04-08 00:52:17.985521 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-08 00:52:17.985525 | orchestrator | Wednesday 08 April 2026 00:52:08 +0000 (0:00:01.113) 0:10:22.334 ******* 2026-04-08 00:52:17.985529 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:52:17.985533 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:52:17.985537 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:52:17.985540 | orchestrator | 2026-04-08 00:52:17.985544 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-08 00:52:17.985551 | orchestrator | Wednesday 08 April 2026 00:52:10 +0000 (0:00:01.172) 0:10:23.507 ******* 2026-04-08 00:52:17.985555 | orchestrator | changed: [testbed-node-3] 2026-04-08 00:52:17.985559 | orchestrator | changed: [testbed-node-4] 2026-04-08 00:52:17.985562 | orchestrator | changed: [testbed-node-5] 2026-04-08 00:52:17.985566 | orchestrator | 2026-04-08 00:52:17.985570 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-08 00:52:17.985574 | orchestrator | Wednesday 08 April 2026 00:52:11 +0000 (0:00:01.736) 0:10:25.244 ******* 2026-04-08 00:52:17.985577 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-08 00:52:17.985581 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 
'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-08 00:52:17.985585 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-08 00:52:17.985589 | orchestrator | 2026-04-08 00:52:17.985593 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-08 00:52:17.985596 | orchestrator | Wednesday 08 April 2026 00:52:14 +0000 (0:00:02.584) 0:10:27.828 ******* 2026-04-08 00:52:17.985600 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.985604 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.985610 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.985614 | orchestrator | 2026-04-08 00:52:17.985618 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-08 00:52:17.985622 | orchestrator | Wednesday 08 April 2026 00:52:14 +0000 (0:00:00.334) 0:10:28.162 ******* 2026-04-08 00:52:17.985625 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:52:17.985629 | orchestrator | 2026-04-08 00:52:17.985633 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-08 00:52:17.985637 | orchestrator | Wednesday 08 April 2026 00:52:15 +0000 (0:00:00.810) 0:10:28.973 ******* 2026-04-08 00:52:17.985641 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.985644 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.985648 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.985652 | orchestrator | 2026-04-08 00:52:17.985656 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-08 00:52:17.985660 | orchestrator | Wednesday 08 April 2026 00:52:15 +0000 (0:00:00.321) 0:10:29.294 ******* 2026-04-08 00:52:17.985663 | orchestrator 
| skipping: [testbed-node-3] 2026-04-08 00:52:17.985667 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:52:17.985671 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:52:17.985675 | orchestrator | 2026-04-08 00:52:17.985679 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-08 00:52:17.985682 | orchestrator | Wednesday 08 April 2026 00:52:16 +0000 (0:00:00.374) 0:10:29.669 ******* 2026-04-08 00:52:17.985686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-08 00:52:17.985690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 00:52:17.985694 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:52:17.985697 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:52:17.985701 | orchestrator | 2026-04-08 00:52:17.985705 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-08 00:52:17.985709 | orchestrator | Wednesday 08 April 2026 00:52:16 +0000 (0:00:00.760) 0:10:30.429 ******* 2026-04-08 00:52:17.985712 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:52:17.985716 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:52:17.985720 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:52:17.985724 | orchestrator | 2026-04-08 00:52:17.985727 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:52:17.985731 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-04-08 00:52:17.985738 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-04-08 00:52:17.985742 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-04-08 00:52:17.985746 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  
rescued=0 ignored=0 2026-04-08 00:52:17.985750 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-04-08 00:52:17.985753 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-04-08 00:52:17.985757 | orchestrator | 2026-04-08 00:52:17.985761 | orchestrator | 2026-04-08 00:52:17.985765 | orchestrator | 2026-04-08 00:52:17.985770 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:52:17.985774 | orchestrator | Wednesday 08 April 2026 00:52:17 +0000 (0:00:00.399) 0:10:30.828 ******* 2026-04-08 00:52:17.985778 | orchestrator | =============================================================================== 2026-04-08 00:52:17.985782 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 64.76s 2026-04-08 00:52:17.985786 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.44s 2026-04-08 00:52:17.985789 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 28.22s 2026-04-08 00:52:17.985793 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.48s 2026-04-08 00:52:17.985797 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 21.97s 2026-04-08 00:52:17.985801 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.95s 2026-04-08 00:52:17.985805 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.91s 2026-04-08 00:52:17.985808 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.21s 2026-04-08 00:52:17.985812 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.45s 2026-04-08 00:52:17.985816 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.44s 2026-04-08 00:52:17.985820 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 7.42s 2026-04-08 00:52:17.985823 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 5.94s 2026-04-08 00:52:17.985827 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.67s 2026-04-08 00:52:17.985831 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.30s 2026-04-08 00:52:17.985835 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.21s 2026-04-08 00:52:17.985838 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.90s 2026-04-08 00:52:17.985845 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.52s 2026-04-08 00:52:17.985849 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.47s 2026-04-08 00:52:17.985853 | orchestrator | ceph-facts : Set_fact _container_exec_cmd ------------------------------- 3.37s 2026-04-08 00:52:17.985856 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.21s 2026-04-08 00:52:17.985860 | orchestrator | 2026-04-08 00:52:17 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is 
in state STARTED 2026-04-08 00:52:17.985864 | orchestrator | 2026-04-08 00:52:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:52:17.985868 | orchestrator | 2026-04-08 00:52:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:52:21.027284 | orchestrator | 2026-04-08 00:52:21 | INFO  | Task 695cf97c-6bff-47f7-a9f1-2c02682e52d3 is in state STARTED 2026-04-08 00:52:21.029190 | orchestrator | 2026-04-08 00:52:21 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is in state STARTED 2026-04-08 00:52:21.031317 | orchestrator | 2026-04-08 00:52:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:52:21.031369 | orchestrator | 2026-04-08 00:52:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:52:24.086263 | orchestrator | 2026-04-08 00:52:24 | INFO  | Task 695cf97c-6bff-47f7-a9f1-2c02682e52d3 is in state STARTED 2026-04-08 00:52:24.088989 | orchestrator | 2026-04-08 00:52:24 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is in state STARTED 2026-04-08 00:52:24.090615 | orchestrator | 2026-04-08 00:52:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:52:24.090653 | orchestrator | 2026-04-08 00:52:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:52:27.131894 | orchestrator | 2026-04-08 00:52:27 | INFO  | Task 695cf97c-6bff-47f7-a9f1-2c02682e52d3 is in state STARTED 2026-04-08 00:52:27.132951 | orchestrator | 2026-04-08 00:52:27 | INFO  | Task 2f44c112-10bb-4737-8023-23b9c7a4a2c8 is in state SUCCESS 2026-04-08 00:52:27.135134 | orchestrator | 2026-04-08 00:52:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:52:27.135177 | orchestrator | 2026-04-08 00:52:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:52:30.179305 | orchestrator | 2026-04-08 00:52:30 | INFO  | Task 695cf97c-6bff-47f7-a9f1-2c02682e52d3 is in state STARTED 2026-04-08 00:52:30.181246 | 
orchestrator | 2026-04-08 00:52:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:52:30.181311 | orchestrator | 2026-04-08 00:52:30 | INFO  | Wait 1 second(s) until the next check [identical STARTED/wait polling for tasks 695cf97c-6bff-47f7-a9f1-2c02682e52d3 and 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 repeated every ~3 seconds from 00:52:33 through 00:54:16, elided] 2026-04-08
00:54:19.914621 | orchestrator | 2026-04-08 00:54:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:54:22.956761 | orchestrator | 2026-04-08 00:54:22 | INFO  | Task 695cf97c-6bff-47f7-a9f1-2c02682e52d3 is in state STARTED 2026-04-08 00:54:22.957082 | orchestrator | 2026-04-08 00:54:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:54:22.957105 | orchestrator | 2026-04-08 00:54:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:54:26.007528 | orchestrator | 2026-04-08 00:54:26 | INFO  | Task 695cf97c-6bff-47f7-a9f1-2c02682e52d3 is in state SUCCESS 2026-04-08 00:54:26.008389 | orchestrator | 2026-04-08 00:54:26.008433 | orchestrator | 2026-04-08 00:54:26.008443 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 00:54:26.008450 | orchestrator | 2026-04-08 00:54:26.008457 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 00:54:26.008464 | orchestrator | Wednesday 08 April 2026 00:51:31 +0000 (0:00:00.331) 0:00:00.331 ******* 2026-04-08 00:54:26.008472 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:54:26.008479 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:54:26.008486 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:54:26.008493 | orchestrator | 2026-04-08 00:54:26.008501 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 00:54:26.008509 | orchestrator | Wednesday 08 April 2026 00:51:31 +0000 (0:00:00.302) 0:00:00.633 ******* 2026-04-08 00:54:26.008517 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-04-08 00:54:26.008525 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-04-08 00:54:26.008533 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-04-08 00:54:26.008540 | orchestrator | 2026-04-08 00:54:26.008548 | orchestrator | PLAY [Apply role magnum] 
******************************************************* 2026-04-08 00:54:26.008555 | orchestrator | 2026-04-08 00:54:26.008563 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-08 00:54:26.008570 | orchestrator | Wednesday 08 April 2026 00:51:31 +0000 (0:00:00.273) 0:00:00.906 ******* 2026-04-08 00:54:26.008578 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:54:26.008586 | orchestrator | 2026-04-08 00:54:26.008594 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting services] *************** 2026-04-08 00:54:26.008601 | orchestrator | Wednesday 08 April 2026 00:51:32 +0000 (0:00:00.605) 0:00:01.512 ******* 2026-04-08 00:54:26.008609 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (5 retries left). 2026-04-08 00:54:26.008617 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (4 retries left). 2026-04-08 00:54:26.008635 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (3 retries left). 2026-04-08 00:54:26.008642 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (2 retries left). 2026-04-08 00:54:26.008650 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (1 retries left). 
2026-04-08 00:54:26.008659 | orchestrator | failed: [testbed-node-0] (item=magnum (container-infra)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Container Infrastructure Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9511/v1"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9511/v1"}], "name": "magnum", "type": "container-infra"}, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-08 00:54:26.008682 | orchestrator | 2026-04-08 00:54:26.008690 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:54:26.008698 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-04-08 00:54:26.008706 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:54:26.008714 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:54:26.008721 | orchestrator | 2026-04-08 00:54:26.008733 | orchestrator | 2026-04-08 00:54:26.008740 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:54:26.008747 | orchestrator | Wednesday 08 April 2026 00:52:25 +0000 (0:00:53.224) 0:00:54.737 ******* 2026-04-08 00:54:26.008754 | orchestrator | =============================================================================== 2026-04-08 00:54:26.008761 | orchestrator | service-ks-register : magnum | Creating/deleting services -------------- 53.22s 2026-04-08 00:54:26.008769 | orchestrator | magnum : include_tasks -------------------------------------------------- 0.61s 2026-04-08 00:54:26.008776 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2026-04-08 00:54:26.008786 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.27s 2026-04-08 00:54:26.008800 | orchestrator | 2026-04-08 00:54:26.008811 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-08 00:54:26.008816 | orchestrator | 2.16.14 2026-04-08 00:54:26.008821 | orchestrator | 2026-04-08 00:54:26.008826 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-04-08 00:54:26.008830 | orchestrator | 2026-04-08 00:54:26.008834 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-08 00:54:26.008839 | orchestrator | Wednesday 08 April 2026 00:52:22 +0000 (0:00:00.582) 0:00:00.582 ******* 2026-04-08 00:54:26.008843 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:54:26.008848 | orchestrator | 2026-04-08 00:54:26.008852 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-08 00:54:26.008857 | orchestrator | Wednesday 08 April 2026 00:52:22 +0000 (0:00:00.604) 0:00:01.186 ******* 2026-04-08 00:54:26.008861 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:26.008865 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:26.008870 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:26.008874 | orchestrator | 2026-04-08 00:54:26.008879 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-08 00:54:26.008892 | orchestrator | Wednesday 08 April 2026 00:52:23 +0000 (0:00:01.015) 0:00:02.202 ******* 2026-04-08 00:54:26.008897 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:26.008901 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:26.008906 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:26.008910 | orchestrator | 2026-04-08 00:54:26.008915 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-08 
00:54:26.008919 | orchestrator | Wednesday 08 April 2026 00:52:24 +0000 (0:00:00.291) 0:00:02.493 ******* 2026-04-08 00:54:26.008926 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:26.008930 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:26.008935 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:26.008939 | orchestrator | 2026-04-08 00:54:26.008943 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-08 00:54:26.008953 | orchestrator | Wednesday 08 April 2026 00:52:25 +0000 (0:00:00.783) 0:00:03.277 ******* 2026-04-08 00:54:26.008966 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:26.008974 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:26.008981 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:26.008988 | orchestrator | 2026-04-08 00:54:26.008996 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-08 00:54:26.009004 | orchestrator | Wednesday 08 April 2026 00:52:25 +0000 (0:00:00.331) 0:00:03.608 ******* 2026-04-08 00:54:26.009012 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:26.009018 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:26.009026 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:26.009031 | orchestrator | 2026-04-08 00:54:26.009037 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-08 00:54:26.009042 | orchestrator | Wednesday 08 April 2026 00:52:25 +0000 (0:00:00.318) 0:00:03.927 ******* 2026-04-08 00:54:26.009047 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:26.009052 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:26.009057 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:26.009062 | orchestrator | 2026-04-08 00:54:26.009067 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-08 00:54:26.009073 | orchestrator | Wednesday 08 April 2026 00:52:25 +0000 
(0:00:00.305) 0:00:04.233 ******* 2026-04-08 00:54:26.009078 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:26.009083 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:26.009089 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:26.009094 | orchestrator | 2026-04-08 00:54:26.009103 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-08 00:54:26.009108 | orchestrator | Wednesday 08 April 2026 00:52:26 +0000 (0:00:00.480) 0:00:04.713 ******* 2026-04-08 00:54:26.009113 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:26.009119 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:26.009123 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:26.009129 | orchestrator | 2026-04-08 00:54:26.009134 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-08 00:54:26.009139 | orchestrator | Wednesday 08 April 2026 00:52:26 +0000 (0:00:00.282) 0:00:04.996 ******* 2026-04-08 00:54:26.009144 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-08 00:54:26.009149 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-08 00:54:26.009155 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-08 00:54:26.009163 | orchestrator | 2026-04-08 00:54:26.009170 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-08 00:54:26.009177 | orchestrator | Wednesday 08 April 2026 00:52:27 +0000 (0:00:00.681) 0:00:05.678 ******* 2026-04-08 00:54:26.009185 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:26.009192 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:26.009199 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:26.009206 | orchestrator | 2026-04-08 00:54:26.009214 | orchestrator | TASK [ceph-facts : Find a running mon 
container] ******************************* 2026-04-08 00:54:26.009222 | orchestrator | Wednesday 08 April 2026 00:52:27 +0000 (0:00:00.376) 0:00:06.054 ******* 2026-04-08 00:54:26.009230 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-08 00:54:26.009238 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-08 00:54:26.009246 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-08 00:54:26.009271 | orchestrator | 2026-04-08 00:54:26.009279 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-08 00:54:26.009287 | orchestrator | Wednesday 08 April 2026 00:52:30 +0000 (0:00:02.780) 0:00:08.834 ******* 2026-04-08 00:54:26.009294 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-08 00:54:26.009302 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-08 00:54:26.009315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-08 00:54:26.009323 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:26.009331 | orchestrator | 2026-04-08 00:54:26.009339 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-08 00:54:26.009347 | orchestrator | Wednesday 08 April 2026 00:52:31 +0000 (0:00:00.448) 0:00:09.283 ******* 2026-04-08 00:54:26.009355 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-08 00:54:26.009361 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 
'ansible_loop_var': 'item'})  2026-04-08 00:54:26.009372 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-08 00:54:26.009377 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:26.009383 | orchestrator | 2026-04-08 00:54:26.009387 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-08 00:54:26.009404 | orchestrator | Wednesday 08 April 2026 00:52:31 +0000 (0:00:00.807) 0:00:10.091 ******* 2026-04-08 00:54:26.009410 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:26.009418 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:26.009426 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 
'ansible_loop_var': 'item'})  2026-04-08 00:54:26.009430 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:26.009435 | orchestrator | 2026-04-08 00:54:26.009440 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-08 00:54:26.009444 | orchestrator | Wednesday 08 April 2026 00:52:31 +0000 (0:00:00.143) 0:00:10.234 ******* 2026-04-08 00:54:26.009450 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f8151b2560db', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-08 00:52:28.722709', 'end': '2026-04-08 00:52:28.761519', 'delta': '0:00:00.038810', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f8151b2560db'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-08 00:54:26.009457 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c117ceb68913', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-08 00:52:29.659444', 'end': '2026-04-08 00:52:29.701046', 'delta': '0:00:00.041602', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c117ceb68913'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-08 00:54:26.009465 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '487ffed37766', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-08 00:52:30.439169', 'end': '2026-04-08 00:52:30.473122', 'delta': '0:00:00.033953', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['487ffed37766'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-08 00:54:26.009470 | orchestrator | 2026-04-08 00:54:26.009474 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-08 00:54:26.009482 | orchestrator | Wednesday 08 April 2026 00:52:32 +0000 (0:00:00.347) 0:00:10.582 ******* 2026-04-08 00:54:26.009487 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:26.009492 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:26.009496 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:26.009500 | orchestrator | 2026-04-08 00:54:26.009505 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-08 00:54:26.009509 | orchestrator | Wednesday 08 April 2026 00:52:32 +0000 (0:00:00.406) 0:00:10.988 ******* 2026-04-08 00:54:26.009514 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-04-08 00:54:26.009518 | orchestrator | 2026-04-08 00:54:26.009523 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-08 00:54:26.009527 | orchestrator | Wednesday 08 April 2026 00:52:34 +0000 (0:00:01.763) 0:00:12.751 ******* 2026-04-08 00:54:26.009531 | orchestrator | skipping: [testbed-node-3] 2026-04-08 
00:54:26.009536 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:26.009540 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:26.009545 | orchestrator | 2026-04-08 00:54:26.009552 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-08 00:54:26.009556 | orchestrator | Wednesday 08 April 2026 00:52:34 +0000 (0:00:00.278) 0:00:13.030 ******* 2026-04-08 00:54:26.009563 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:26.009570 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:26.009580 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:26.009594 | orchestrator | 2026-04-08 00:54:26.009601 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-08 00:54:26.009607 | orchestrator | Wednesday 08 April 2026 00:52:35 +0000 (0:00:00.442) 0:00:13.473 ******* 2026-04-08 00:54:26.009614 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:26.009620 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:26.009627 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:26.009634 | orchestrator | 2026-04-08 00:54:26.009641 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-08 00:54:26.009648 | orchestrator | Wednesday 08 April 2026 00:52:35 +0000 (0:00:00.513) 0:00:13.987 ******* 2026-04-08 00:54:26.009655 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:26.009661 | orchestrator | 2026-04-08 00:54:26.009668 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-08 00:54:26.009686 | orchestrator | Wednesday 08 April 2026 00:52:35 +0000 (0:00:00.134) 0:00:14.122 ******* 2026-04-08 00:54:26.009694 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:26.009701 | orchestrator | 2026-04-08 00:54:26.009707 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-04-08 00:54:26.009714 | orchestrator | Wednesday 08 April 2026 00:52:36 +0000 (0:00:00.219) 0:00:14.341 ******* 2026-04-08 00:54:26.009721 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:26.009729 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:26.009737 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:26.009745 | orchestrator | 2026-04-08 00:54:26.009753 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-08 00:54:26.009759 | orchestrator | Wednesday 08 April 2026 00:52:36 +0000 (0:00:00.277) 0:00:14.619 ******* 2026-04-08 00:54:26.009763 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:26.009767 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:26.009772 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:26.009776 | orchestrator | 2026-04-08 00:54:26.009780 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-08 00:54:26.009785 | orchestrator | Wednesday 08 April 2026 00:52:36 +0000 (0:00:00.311) 0:00:14.931 ******* 2026-04-08 00:54:26.009789 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:26.009793 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:26.009798 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:26.009802 | orchestrator | 2026-04-08 00:54:26.009806 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-08 00:54:26.009811 | orchestrator | Wednesday 08 April 2026 00:52:37 +0000 (0:00:00.486) 0:00:15.417 ******* 2026-04-08 00:54:26.009815 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:26.009819 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:26.009824 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:26.009828 | orchestrator | 2026-04-08 00:54:26.009832 | orchestrator | TASK [ceph-facts : Set_fact build 
dedicated_devices from resolved symlinks] **** 2026-04-08 00:54:26.009837 | orchestrator | Wednesday 08 April 2026 00:52:37 +0000 (0:00:00.306) 0:00:15.724 ******* 2026-04-08 00:54:26.009841 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:26.009845 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:26.009849 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:26.009854 | orchestrator | 2026-04-08 00:54:26.009858 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-08 00:54:26.009862 | orchestrator | Wednesday 08 April 2026 00:52:37 +0000 (0:00:00.320) 0:00:16.044 ******* 2026-04-08 00:54:26.009867 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:26.009871 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:26.009875 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:26.009880 | orchestrator | 2026-04-08 00:54:26.009884 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-08 00:54:26.009888 | orchestrator | Wednesday 08 April 2026 00:52:38 +0000 (0:00:00.316) 0:00:16.361 ******* 2026-04-08 00:54:26.009893 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:26.009897 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:26.009901 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:26.009906 | orchestrator | 2026-04-08 00:54:26.009910 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-08 00:54:26.009914 | orchestrator | Wednesday 08 April 2026 00:52:38 +0000 (0:00:00.506) 0:00:16.867 ******* 2026-04-08 00:54:26.009980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--19ae3695--7a84--5d0f--ba8d--a81d8fecc8c8-osd--block--19ae3695--7a84--5d0f--ba8d--a81d8fecc8c8', 
'dm-uuid-LVM-xwCsGlDwFfkxburlVqB5NLDI6n7sZpTvjhaJzMQa8eJCFjLlT410JpbIrJ5LtPNv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.009999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9c748ac0--b7ad--5284--8a6e--a168bddd5b66-osd--block--9c748ac0--b7ad--5284--8a6e--a168bddd5b66', 'dm-uuid-LVM-XLVRyFhPs4iyEi8xqu03f7y4c8kn3scmlHnu77STip8Ug3VlNS1rlqeaSKGQ5WqB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010049 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010055 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010065 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92', 'scsi-SQEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92-part1', 'scsi-SQEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92-part14', 'scsi-SQEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92-part15', 
'scsi-SQEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92-part16', 'scsi-SQEMU_QEMU_HARDDISK_4596a618-b0c7-4f6c-b3f8-3bb0eece7c92-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:26.010099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--19ae3695--7a84--5d0f--ba8d--a81d8fecc8c8-osd--block--19ae3695--7a84--5d0f--ba8d--a81d8fecc8c8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KIB693-1MXL-Jsrw-Vj0a-y756-IACV-bcAZ1n', 'scsi-0QEMU_QEMU_HARDDISK_9851de66-42ac-4afe-9f6b-65921d8ebe77', 'scsi-SQEMU_QEMU_HARDDISK_9851de66-42ac-4afe-9f6b-65921d8ebe77'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:26.010105 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9c748ac0--b7ad--5284--8a6e--a168bddd5b66-osd--block--9c748ac0--b7ad--5284--8a6e--a168bddd5b66'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fF1fTX-jpev-cNOf-sWvF-b0nY-2dsf-dsD3cE', 'scsi-0QEMU_QEMU_HARDDISK_3c9bb5e0-782f-4e13-9d09-525e18a95d4a', 'scsi-SQEMU_QEMU_HARDDISK_3c9bb5e0-782f-4e13-9d09-525e18a95d4a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:26.010116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c5eee886--e951--5b32--a4a0--4842fe7aed13-osd--block--c5eee886--e951--5b32--a4a0--4842fe7aed13', 'dm-uuid-LVM-hSJJjoTW0i9cqMB7qnzyDSUuFdptcJJbpgOsaXvL3Qzue28rxFzgg6iQ1OJLNey5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10ca2d35-3b66-46f3-ab0f-253d8a66f2e6', 'scsi-SQEMU_QEMU_HARDDISK_10ca2d35-3b66-46f3-ab0f-253d8a66f2e6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:26.010135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--16b9c52d--170e--5f8d--b9c1--c30752bb4b9e-osd--block--16b9c52d--170e--5f8d--b9c1--c30752bb4b9e', 'dm-uuid-LVM-MNYRr1GmUlANIkrAm8Q1XiTJ6Tj3RDwVlEQKgEfBtVKj0DMgSbGsmSH0IckhcMP5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-02-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:26.010145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010166 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:26.010193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c80af5d6--1159--5955--8f01--035b314db1bd-osd--block--c80af5d6--1159--5955--8f01--035b314db1bd', 'dm-uuid-LVM-KlTrF1EDIjiTHHK8zRzK8yCGxCI0DGQ1CUnwoXChyL021HsQR4VIfiu0fYA0jc6C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d7d0ff5a--46f9--53d2--8425--61ef59e49033-osd--block--d7d0ff5a--46f9--53d2--8425--61ef59e49033', 'dm-uuid-LVM-rXS6OKBks0F68YdHLhvZFzeH4w2Md7iuhu1erBcrjvjJQBIjk4II21gfgMcpkuKL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa', 'scsi-SQEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_1f87f253-d467-48bc-bac0-692ec5abf0aa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:26.010237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c5eee886--e951--5b32--a4a0--4842fe7aed13-osd--block--c5eee886--e951--5b32--a4a0--4842fe7aed13'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-T24umF-dGrh-Fo0n-yTgT-OrMV-jVVv-MHbK0G', 'scsi-0QEMU_QEMU_HARDDISK_49047f2d-69c1-4fac-a475-f46440c51814', 'scsi-SQEMU_QEMU_HARDDISK_49047f2d-69c1-4fac-a475-f46440c51814'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:26.010262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--16b9c52d--170e--5f8d--b9c1--c30752bb4b9e-osd--block--16b9c52d--170e--5f8d--b9c1--c30752bb4b9e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-c54kMR-Ip2S-4T1g-ey67-uSnv-3dsN-HVYVia', 'scsi-0QEMU_QEMU_HARDDISK_0a17ff12-522e-4235-8e4c-edb4898b90f5', 'scsi-SQEMU_QEMU_HARDDISK_0a17ff12-522e-4235-8e4c-edb4898b90f5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:26.010272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010284 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_22a44c82-a679-4e37-857c-f96ffb845a8a', 'scsi-SQEMU_QEMU_HARDDISK_22a44c82-a679-4e37-857c-f96ffb845a8a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:26.010289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-02-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:26.010298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-08 00:54:26.010303 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:26.010310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-08 00:54:26.010327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343', 'scsi-SQEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5ab6e22-4bb9-43f8-8e9a-32a0c49ce343-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:26.010338 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c80af5d6--1159--5955--8f01--035b314db1bd-osd--block--c80af5d6--1159--5955--8f01--035b314db1bd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8o3EVe-Utw7-lM15-VLzY-7aD3-pHv9-pl9uyv', 'scsi-0QEMU_QEMU_HARDDISK_8b42dfd4-7d2e-4d67-9c6f-8993b8aa5f54', 'scsi-SQEMU_QEMU_HARDDISK_8b42dfd4-7d2e-4d67-9c6f-8993b8aa5f54'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:26.010343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d7d0ff5a--46f9--53d2--8425--61ef59e49033-osd--block--d7d0ff5a--46f9--53d2--8425--61ef59e49033'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-I6vPbl-qQTp-H4zu-SOPt-3OKc-cfy1-s5oD5i', 'scsi-0QEMU_QEMU_HARDDISK_29911cfa-2062-4b30-9263-aae8438640a0', 'scsi-SQEMU_QEMU_HARDDISK_29911cfa-2062-4b30-9263-aae8438640a0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:26.010348 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b1a6d0f-b1ea-4446-8ff8-479db77ebf36', 'scsi-SQEMU_QEMU_HARDDISK_7b1a6d0f-b1ea-4446-8ff8-479db77ebf36'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:26.010356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-08-00-02-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-08 00:54:26.010361 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:54:26.010365 | orchestrator | 2026-04-08 00:54:26.010370 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-08 00:54:26.010375 | orchestrator | Wednesday 08 April 2026 00:52:39 +0000 (0:00:00.506) 0:00:17.374 ******* 2026-04-08 00:54:26.010382 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--19ae3695--7a84--5d0f--ba8d--a81d8fecc8c8-osd--block--19ae3695--7a84--5d0f--ba8d--a81d8fecc8c8', 'dm-uuid-LVM-xwCsGlDwFfkxburlVqB5NLDI6n7sZpTvjhaJzMQa8eJCFjLlT410JpbIrJ5LtPNv'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:26.010387 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9c748ac0--b7ad--5284--8a6e--a168bddd5b66-osd--block--9c748ac0--b7ad--5284--8a6e--a168bddd5b66', 'dm-uuid-LVM-XLVRyFhPs4iyEi8xqu03f7y4c8kn3scmlHnu77STip8Ug3VlNS1rlqeaSKGQ5WqB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:26.010393 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:26.010398 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:26.010403 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:26.010410 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:26.010417 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:26.010422 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:26.010427 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-08 00:54:26.010433 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--c5eee886--e951--5b32--a4a0--4842fe7aed13-osd--block--c5eee886--e951--5b32--a4a0--4842fe7aed13', 'dm-uuid-LVM-hSJJjoTW0i9cqMB7qnzyDSUuFdptcJJbpgOsaXvL3Qzue28rxFzgg6iQ1OJLNey5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-08 00:54:26.010438 | orchestrator | skipping: [testbed-node-3] => (item=loop7, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010446 | orchestrator | skipping: [testbed-node-4] => (item=dm-1, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010457 | orchestrator | skipping: [testbed-node-3] => (item=sda, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010462 | orchestrator | skipping: [testbed-node-4] => (item=loop0, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010471 | orchestrator | skipping: [testbed-node-3] => (item=sdb, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010476 | orchestrator | skipping: [testbed-node-4] => (item=loop1, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010484 | orchestrator | skipping: [testbed-node-3] => (item=sdc, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010488 | orchestrator | skipping: [testbed-node-4] => (item=loop2, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010495 | orchestrator | skipping: [testbed-node-3] => (item=sdd, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010503 | orchestrator | skipping: [testbed-node-4] => (item=loop3, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010508 | orchestrator | skipping: [testbed-node-3] => (item=sr0, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010513 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:26.010519 | orchestrator | skipping: [testbed-node-4] => (item=loop4, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010524 | orchestrator | skipping: [testbed-node-4] => (item=loop5, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010529 | orchestrator | skipping: [testbed-node-4] => (item=loop6, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010535 | orchestrator | skipping: [testbed-node-4] => (item=loop7, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010543 | orchestrator | skipping: [testbed-node-5] => (item=dm-0, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010551 | orchestrator | skipping: [testbed-node-4] => (item=sda, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010558 | orchestrator | skipping: [testbed-node-5] => (item=dm-1, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010563 | orchestrator | skipping: [testbed-node-4] => (item=sdb, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010570 | orchestrator | skipping: [testbed-node-5] => (item=loop0, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010577 | orchestrator | skipping: [testbed-node-4] => (item=sdc, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010582 | orchestrator | skipping: [testbed-node-5] => (item=loop1, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010589 | orchestrator | skipping: [testbed-node-4] => (item=sdd, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010593 | orchestrator | skipping: [testbed-node-5] => (item=loop2, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010601 | orchestrator | skipping: [testbed-node-4] => (item=sr0, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010605 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:26.010610 | orchestrator | skipping: [testbed-node-5] => (item=loop3, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010617 | orchestrator | skipping: [testbed-node-5] => (item=loop4, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010621 | orchestrator | skipping: [testbed-node-5] => (item=loop5, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010626 | orchestrator | skipping: [testbed-node-5] => (item=loop6, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010632 | orchestrator | skipping: [testbed-node-5] => (item=loop7, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010642 | orchestrator | skipping: [testbed-node-5] => (item=sda, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010648 | orchestrator | skipping: [testbed-node-5] => (item=sdb, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010654 | orchestrator | skipping: [testbed-node-5] => (item=sdc, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010662 | orchestrator | skipping: [testbed-node-5] => (item=sdd, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010666 | orchestrator | skipping: [testbed-node-5] => (item=sr0, false_condition='osd_auto_discovery | default(False) | bool')
2026-04-08 00:54:26.010671 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:26.010675 | orchestrator |
2026-04-08 00:54:26.010680 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-08 00:54:26.010684 | orchestrator | Wednesday 08 April 2026 00:52:39 +0000 (0:00:00.509) 0:00:17.883 *******
2026-04-08 00:54:26.010689 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:26.010693 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:26.010698 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:26.010702 | orchestrator |
2026-04-08 00:54:26.010707 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-08 00:54:26.010714 | orchestrator | Wednesday 08 April 2026 00:52:40 +0000 (0:00:00.647) 0:00:18.531 *******
2026-04-08 00:54:26.010718 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:26.010723 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:26.010727 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:26.010731 | orchestrator |
2026-04-08 00:54:26.010736 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-08 00:54:26.010741 | orchestrator | Wednesday 08 April 2026 00:52:40 +0000 (0:00:00.394) 0:00:18.926 *******
2026-04-08 00:54:26.010745 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:54:26.010749 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:54:26.010754 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:54:26.010758 | orchestrator |
2026-04-08 00:54:26.010762 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-08 00:54:26.010767 | orchestrator | Wednesday 08 April 2026 00:52:41 +0000 (0:00:00.553) 0:00:19.479 *******
2026-04-08 00:54:26.010771 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:26.010776 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:26.010780 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:26.010787 | orchestrator |
2026-04-08 00:54:26.010792 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-08 00:54:26.010796 | orchestrator | Wednesday 08 April 2026 00:52:41 +0000 (0:00:00.248) 0:00:19.728 *******
2026-04-08 00:54:26.010801 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:26.010805 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:26.010809 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:26.010814 | orchestrator |
2026-04-08 00:54:26.010818 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-08 00:54:26.010822 | orchestrator | Wednesday 08 April 2026 00:52:41 +0000 (0:00:00.349) 0:00:20.077 *******
2026-04-08 00:54:26.010827 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:26.010831 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:26.010835 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:26.010840 | orchestrator |
2026-04-08 00:54:26.010844 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-08 00:54:26.010848 | orchestrator | Wednesday 08 April 2026 00:52:42 +0000 (0:00:00.377) 0:00:20.454 *******
2026-04-08 00:54:26.010853 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-08 00:54:26.010857 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-08 00:54:26.010864 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-08 00:54:26.010868 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-08 00:54:26.010872 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-08 00:54:26.010877 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-08 00:54:26.010881 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-08 00:54:26.010886 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-08 00:54:26.010890 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-08 00:54:26.010894 | orchestrator |
2026-04-08 00:54:26.010899 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-08 00:54:26.010903 | orchestrator | Wednesday 08 April 2026 00:52:42 +0000 (0:00:00.721) 0:00:21.176 *******
2026-04-08 00:54:26.010908 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-08 00:54:26.010912 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-08 00:54:26.010916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-08 00:54:26.010921 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:26.010925 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-08 00:54:26.010929 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-08 00:54:26.010934 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-08 00:54:26.010938 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:26.010942 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-08 00:54:26.010947 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-08 00:54:26.010951 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-08 00:54:26.010956 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:26.010964 | orchestrator |
2026-04-08 00:54:26.010971 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-08 00:54:26.010978 | orchestrator | Wednesday 08 April 2026 00:52:43 +0000 (0:00:00.301) 0:00:21.478 *******
2026-04-08 00:54:26.010985 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:54:26.010993 | orchestrator |
2026-04-08 00:54:26.011001 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-08 00:54:26.011008 | orchestrator | Wednesday 08 April 2026 00:52:43 +0000 (0:00:00.547) 0:00:22.026 *******
2026-04-08 00:54:26.011015 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:26.011023 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:26.011030 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:26.011039 | orchestrator |
2026-04-08 00:54:26.011044 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-08 00:54:26.011048 | orchestrator | Wednesday 08 April 2026 00:52:44 +0000 (0:00:00.289) 0:00:22.315 *******
2026-04-08 00:54:26.011053 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:26.011057 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:26.011061 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:26.011065 | orchestrator |
2026-04-08 00:54:26.011070 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-08 00:54:26.011074 | orchestrator | Wednesday 08 April 2026 00:52:44 +0000 (0:00:00.249) 0:00:22.565 *******
2026-04-08 00:54:26.011078 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:54:26.011083 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:54:26.011087 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:54:26.011095 | orchestrator |
2026-04-08 00:54:26.011103 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-08 00:54:26.011114 | orchestrator | Wednesday 08 April 2026 00:52:44 +0000 (0:00:00.267) 0:00:22.832 ******* 2026-04-08 
00:54:26.011125 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:26.011134 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:26.011141 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:26.011148 | orchestrator | 2026-04-08 00:54:26.011156 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-08 00:54:26.011160 | orchestrator | Wednesday 08 April 2026 00:52:45 +0000 (0:00:00.489) 0:00:23.321 ******* 2026-04-08 00:54:26.011165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-08 00:54:26.011169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 00:54:26.011173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:54:26.011178 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:26.011182 | orchestrator | 2026-04-08 00:54:26.011187 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-08 00:54:26.011191 | orchestrator | Wednesday 08 April 2026 00:52:45 +0000 (0:00:00.384) 0:00:23.706 ******* 2026-04-08 00:54:26.011196 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-08 00:54:26.011200 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 00:54:26.011204 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:54:26.011209 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:26.011213 | orchestrator | 2026-04-08 00:54:26.011217 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-08 00:54:26.011222 | orchestrator | Wednesday 08 April 2026 00:52:45 +0000 (0:00:00.333) 0:00:24.039 ******* 2026-04-08 00:54:26.011226 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-08 00:54:26.011230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-08 00:54:26.011234 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-08 00:54:26.011239 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:26.011243 | orchestrator | 2026-04-08 00:54:26.011247 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-08 00:54:26.011292 | orchestrator | Wednesday 08 April 2026 00:52:46 +0000 (0:00:00.332) 0:00:24.372 ******* 2026-04-08 00:54:26.011297 | orchestrator | ok: [testbed-node-3] 2026-04-08 00:54:26.011301 | orchestrator | ok: [testbed-node-4] 2026-04-08 00:54:26.011309 | orchestrator | ok: [testbed-node-5] 2026-04-08 00:54:26.011314 | orchestrator | 2026-04-08 00:54:26.011318 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-08 00:54:26.011322 | orchestrator | Wednesday 08 April 2026 00:52:46 +0000 (0:00:00.293) 0:00:24.666 ******* 2026-04-08 00:54:26.011327 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-08 00:54:26.011331 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-08 00:54:26.011335 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-08 00:54:26.011340 | orchestrator | 2026-04-08 00:54:26.011349 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-08 00:54:26.011354 | orchestrator | Wednesday 08 April 2026 00:52:46 +0000 (0:00:00.459) 0:00:25.125 ******* 2026-04-08 00:54:26.011359 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-08 00:54:26.011363 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-08 00:54:26.011367 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-08 00:54:26.011372 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-08 00:54:26.011376 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-04-08 00:54:26.011380 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-08 00:54:26.011385 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-08 00:54:26.011389 | orchestrator | 2026-04-08 00:54:26.011393 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-08 00:54:26.011398 | orchestrator | Wednesday 08 April 2026 00:52:47 +0000 (0:00:00.803) 0:00:25.929 ******* 2026-04-08 00:54:26.011402 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-08 00:54:26.011407 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-08 00:54:26.011411 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-08 00:54:26.011415 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-08 00:54:26.011420 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-08 00:54:26.011424 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-08 00:54:26.011429 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-08 00:54:26.011433 | orchestrator | 2026-04-08 00:54:26.011437 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-08 00:54:26.011442 | orchestrator | Wednesday 08 April 2026 00:52:49 +0000 (0:00:01.672) 0:00:27.601 ******* 2026-04-08 00:54:26.011446 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:54:26.011451 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:54:26.011455 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-08 00:54:26.011459 | orchestrator | 2026-04-08 00:54:26.011464 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-08 00:54:26.011468 | orchestrator | Wednesday 08 April 2026 00:52:49 +0000 (0:00:00.318) 0:00:27.920 ******* 2026-04-08 00:54:26.011476 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-08 00:54:26.011482 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-08 00:54:26.011487 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-08 00:54:26.011491 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-08 00:54:26.011499 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-08 00:54:26.011504 | orchestrator | 2026-04-08 00:54:26.011508 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-04-08 00:54:26.011513 | orchestrator | Wednesday 08 April 2026 00:53:33 +0000 (0:00:44.095) 0:01:12.015 ******* 2026-04-08 00:54:26.011517 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:26.011524 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:26.011528 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:26.011533 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:26.011537 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:26.011541 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:26.011546 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-08 00:54:26.011550 | orchestrator | 2026-04-08 00:54:26.011554 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-08 00:54:26.011559 | orchestrator | Wednesday 08 April 2026 00:53:56 +0000 (0:00:22.875) 0:01:34.891 ******* 2026-04-08 00:54:26.011563 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:26.011567 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:26.011572 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:26.011576 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:26.011581 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:26.011585 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:26.011589 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-08 00:54:26.011594 | orchestrator | 2026-04-08 00:54:26.011598 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-08 00:54:26.011603 | orchestrator | Wednesday 08 April 2026 00:54:07 +0000 (0:00:10.830) 0:01:45.722 ******* 2026-04-08 00:54:26.011607 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:26.011611 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-08 00:54:26.011616 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-08 00:54:26.011620 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:26.011624 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-08 00:54:26.011629 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-08 00:54:26.011633 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:26.011637 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-08 00:54:26.011642 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-08 00:54:26.011646 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:26.011650 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-08 00:54:26.011655 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-08 00:54:26.011659 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:26.011667 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-04-08 00:54:26.011674 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-08 00:54:26.011678 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-08 00:54:26.011683 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-08 00:54:26.011687 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-08 00:54:26.011692 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-04-08 00:54:26.011696 | orchestrator | 2026-04-08 00:54:26.011701 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:54:26.011705 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-08 00:54:26.011710 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-08 00:54:26.011715 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-08 00:54:26.011719 | orchestrator | 2026-04-08 00:54:26.011723 | orchestrator | 2026-04-08 00:54:26.011728 | orchestrator | 2026-04-08 00:54:26.011735 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:54:26.011742 | orchestrator | Wednesday 08 April 2026 00:54:24 +0000 (0:00:16.557) 0:02:02.280 ******* 2026-04-08 00:54:26.011749 | orchestrator | =============================================================================== 2026-04-08 00:54:26.011758 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.10s 2026-04-08 00:54:26.011769 | orchestrator | generate keys ---------------------------------------------------------- 22.88s 2026-04-08 00:54:26.011776 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.56s 
2026-04-08 00:54:26.011784 | orchestrator | get keys from monitors ------------------------------------------------- 10.83s 2026-04-08 00:54:26.011794 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.78s 2026-04-08 00:54:26.011801 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.76s 2026-04-08 00:54:26.011809 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.67s 2026-04-08 00:54:26.011817 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 1.02s 2026-04-08 00:54:26.011824 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.81s 2026-04-08 00:54:26.011832 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.80s 2026-04-08 00:54:26.011840 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.78s 2026-04-08 00:54:26.011848 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.72s 2026-04-08 00:54:26.011856 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.68s 2026-04-08 00:54:26.011861 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.65s 2026-04-08 00:54:26.011866 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.60s 2026-04-08 00:54:26.011870 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.55s 2026-04-08 00:54:26.011874 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.55s 2026-04-08 00:54:26.011879 | orchestrator | ceph-facts : Set_fact fsid ---------------------------------------------- 0.51s 2026-04-08 00:54:26.011883 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.51s 2026-04-08 
00:54:26.011887 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.51s 2026-04-08 00:54:26.011892 | orchestrator | 2026-04-08 00:54:26 | INFO  | Task 380835dc-5813-43a2-8956-5eab84bbe07a is in state STARTED 2026-04-08 00:54:26.011900 | orchestrator | 2026-04-08 00:54:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:54:26.011905 | orchestrator | 2026-04-08 00:54:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:54:29.067132 | orchestrator | 2026-04-08 00:54:29 | INFO  | Task 380835dc-5813-43a2-8956-5eab84bbe07a is in state STARTED 2026-04-08 00:54:29.069084 | orchestrator | 2026-04-08 00:54:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:54:29.069162 | orchestrator | 2026-04-08 00:54:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:54:32.111536 | orchestrator | 2026-04-08 00:54:32 | INFO  | Task 380835dc-5813-43a2-8956-5eab84bbe07a is in state STARTED 2026-04-08 00:54:32.113292 | orchestrator | 2026-04-08 00:54:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:54:32.113377 | orchestrator | 2026-04-08 00:54:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:54:35.154162 | orchestrator | 2026-04-08 00:54:35 | INFO  | Task 380835dc-5813-43a2-8956-5eab84bbe07a is in state STARTED 2026-04-08 00:54:35.156610 | orchestrator | 2026-04-08 00:54:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:54:35.156717 | orchestrator | 2026-04-08 00:54:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:54:38.195146 | orchestrator | 2026-04-08 00:54:38 | INFO  | Task 380835dc-5813-43a2-8956-5eab84bbe07a is in state STARTED 2026-04-08 00:54:38.196383 | orchestrator | 2026-04-08 00:54:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:54:38.196432 | orchestrator | 2026-04-08 00:54:38 | INFO  | Wait 
1 second(s) until the next check 2026-04-08 00:54:56.490216 | orchestrator | 
2026-04-08 00:54:56 | INFO  | Task 380835dc-5813-43a2-8956-5eab84bbe07a is in state STARTED 2026-04-08 00:54:56.494998 | orchestrator | 2026-04-08 00:54:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:54:56.495086 | orchestrator | 2026-04-08 00:54:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:54:59.546142 | orchestrator | 2026-04-08 00:54:59 | INFO  | Task 380835dc-5813-43a2-8956-5eab84bbe07a is in state STARTED 2026-04-08 00:54:59.548854 | orchestrator | 2026-04-08 00:54:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:54:59.548923 | orchestrator | 2026-04-08 00:54:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:55:02.599389 | orchestrator | 2026-04-08 00:55:02 | INFO  | Task 380835dc-5813-43a2-8956-5eab84bbe07a is in state SUCCESS 2026-04-08 00:55:02.601654 | orchestrator | 2026-04-08 00:55:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:55:02.604626 | orchestrator | 2026-04-08 00:55:02 | INFO  | Task 14188f82-e985-4029-80ce-fe6041e9f8c2 is in state STARTED 2026-04-08 00:55:02.604690 | orchestrator | 2026-04-08 00:55:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:55:05.654324 | orchestrator | 2026-04-08 00:55:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:55:05.654426 | orchestrator | 2026-04-08 00:55:05 | INFO  | Task 14188f82-e985-4029-80ce-fe6041e9f8c2 is in state STARTED 2026-04-08 00:55:05.654437 | orchestrator | 2026-04-08 00:55:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:55:08.692131 | orchestrator | 2026-04-08 00:55:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:55:08.693544 | orchestrator | 2026-04-08 00:55:08 | INFO  | Task 14188f82-e985-4029-80ce-fe6041e9f8c2 is in state STARTED 2026-04-08 00:55:08.693596 | orchestrator | 2026-04-08 00:55:08 | INFO  | Wait 1 second(s) until 
the next check 2026-04-08 00:55:57.458342 | orchestrator | 2026-04-08 00:55:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:55:57.458916 | orchestrator | 2026-04-08 00:55:57 | INFO  | Task 
14188f82-e985-4029-80ce-fe6041e9f8c2 is in state STARTED 2026-04-08 00:55:57.691647 | orchestrator | 2026-04-08 00:55:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:56:00.506530 | orchestrator | 2026-04-08 00:56:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:56:00.507507 | orchestrator | 2026-04-08 00:56:00 | INFO  | Task 14188f82-e985-4029-80ce-fe6041e9f8c2 is in state STARTED 2026-04-08 00:56:00.507543 | orchestrator | 2026-04-08 00:56:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:56:03.554276 | orchestrator | 2026-04-08 00:56:03 | INFO  | Task fc1cb447-32c5-4fcf-981e-096eb440472c is in state STARTED 2026-04-08 00:56:03.556558 | orchestrator | 2026-04-08 00:56:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 00:56:03.557983 | orchestrator | 2026-04-08 00:56:03 | INFO  | Task d4ba0406-23a5-4811-a75f-99387af6b44a is in state STARTED 2026-04-08 00:56:03.559656 | orchestrator | 2026-04-08 00:56:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:56:03.562403 | orchestrator | 2026-04-08 00:56:03 | INFO  | Task 14188f82-e985-4029-80ce-fe6041e9f8c2 is in state SUCCESS 2026-04-08 00:56:03.562869 | orchestrator | 2026-04-08 00:56:03.562901 | orchestrator | 2026-04-08 00:56:03.562908 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-04-08 00:56:03.562915 | orchestrator | 2026-04-08 00:56:03.562921 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-04-08 00:56:03.562928 | orchestrator | Wednesday 08 April 2026 00:54:27 +0000 (0:00:00.232) 0:00:00.232 ******* 2026-04-08 00:56:03.562935 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-08 00:56:03.562943 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.cinder.keyring)
2026-04-08 00:56:03.562949 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-08 00:56:03.562955 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-08 00:56:03.562961 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-08 00:56:03.562967 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-08 00:56:03.562973 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-08 00:56:03.562980 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-08 00:56:03.562986 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-08 00:56:03.563012 | orchestrator |
2026-04-08 00:56:03.563018 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-04-08 00:56:03.563024 | orchestrator | Wednesday 08 April 2026 00:54:32 +0000 (0:00:04.504) 0:00:04.736 *******
2026-04-08 00:56:03.563067 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-08 00:56:03.563073 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-08 00:56:03.563104 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-08 00:56:03.563111 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-08 00:56:03.563126 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-08 00:56:03.563132 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-08 00:56:03.563139 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-08 00:56:03.563145 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-08 00:56:03.563151 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-08 00:56:03.563157 | orchestrator |
2026-04-08 00:56:03.563163 | orchestrator | TASK [Create share directory] **************************************************
2026-04-08 00:56:03.563170 | orchestrator | Wednesday 08 April 2026 00:54:36 +0000 (0:00:03.882) 0:00:08.619 *******
2026-04-08 00:56:03.563175 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-08 00:56:03.563180 | orchestrator |
2026-04-08 00:56:03.563184 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-04-08 00:56:03.563188 | orchestrator | Wednesday 08 April 2026 00:54:37 +0000 (0:00:01.016) 0:00:09.636 *******
2026-04-08 00:56:03.563192 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-04-08 00:56:03.563196 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-08 00:56:03.563200 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-08 00:56:03.563204 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-04-08 00:56:03.563208 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-08 00:56:03.563211 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-04-08 00:56:03.563216 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-04-08 00:56:03.563219 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-04-08 00:56:03.563282 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-04-08 00:56:03.563289 | orchestrator |
2026-04-08 00:56:03.563293 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-04-08 00:56:03.563296 | orchestrator | Wednesday 08 April 2026 00:54:50 +0000 (0:00:13.418) 0:00:23.055 *******
2026-04-08 00:56:03.563314 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-04-08 00:56:03.563318 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-04-08 00:56:03.563322 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-08 00:56:03.563326 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-08 00:56:03.563341 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-08 00:56:03.563345 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-08 00:56:03.563349 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-04-08 00:56:03.563353 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-04-08 00:56:03.563357 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-04-08 00:56:03.563366 | orchestrator |
2026-04-08 00:56:03.563370 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-04-08 00:56:03.563374 | orchestrator | Wednesday 08 April 2026 00:54:53 +0000 (0:00:03.111) 0:00:26.167 *******
2026-04-08 00:56:03.563378 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-04-08 00:56:03.563382 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-08 00:56:03.563386 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-08 00:56:03.563390 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-04-08 00:56:03.563393 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-08 00:56:03.563397 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-04-08 00:56:03.563401 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-04-08 00:56:03.563405 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-04-08 00:56:03.563409 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-04-08 00:56:03.563412 | orchestrator |
2026-04-08 00:56:03.563416 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:56:03.563420 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-08 00:56:03.563425 | orchestrator |
2026-04-08 00:56:03.563429 | orchestrator |
2026-04-08 00:56:03.563432 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:56:03.563436 | orchestrator | Wednesday 08 April 2026 00:55:00 +0000 (0:00:06.989) 0:00:33.156 *******
2026-04-08 00:56:03.563440 | orchestrator | ===============================================================================
2026-04-08 00:56:03.563444 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.42s
2026-04-08 00:56:03.563448 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.99s
2026-04-08 00:56:03.563451 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.50s
2026-04-08 00:56:03.563456 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 3.88s
2026-04-08 00:56:03.563459 | orchestrator | Check if target directories exist --------------------------------------- 3.11s
2026-04-08 00:56:03.563463 | orchestrator | Create share directory -------------------------------------------------- 1.02s
2026-04-08 00:56:03.563467 | orchestrator |
2026-04-08 00:56:03.563471 | orchestrator |
2026-04-08 00:56:03.563475 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-04-08 00:56:03.563479 | orchestrator |
2026-04-08 00:56:03.563482 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-04-08 00:56:03.563486 | orchestrator | Wednesday 08 April 2026 00:55:04 +0000 (0:00:00.305) 0:00:00.305 *******
2026-04-08 00:56:03.563490 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-04-08 00:56:03.563495 | orchestrator |
2026-04-08 00:56:03.563500 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-04-08 00:56:03.563504 | orchestrator | Wednesday 08 April 2026 00:55:04 +0000 (0:00:00.239) 0:00:00.544 *******
2026-04-08 00:56:03.563509 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-04-08 00:56:03.563514 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-04-08 00:56:03.563519 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-04-08 00:56:03.563523 | orchestrator |
2026-04-08 00:56:03.563528 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-04-08 00:56:03.563532 | orchestrator | Wednesday 08 April 2026 00:55:06 +0000 (0:00:01.606) 0:00:02.151 *******
2026-04-08 00:56:03.563537 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-04-08 00:56:03.563545 | orchestrator |
2026-04-08 00:56:03.563550 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-04-08 00:56:03.563555 | orchestrator | Wednesday 08 April 2026 00:55:07 +0000 (0:00:01.165) 0:00:03.316 *******
2026-04-08 00:56:03.563561 | orchestrator | changed: [testbed-manager]
2026-04-08 00:56:03.563568 | orchestrator |
2026-04-08 00:56:03.563574 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-04-08 00:56:03.563580 | orchestrator | Wednesday 08 April 2026 00:55:08 +0000 (0:00:00.912) 0:00:04.229 *******
2026-04-08 00:56:03.563589 | orchestrator | changed: [testbed-manager]
2026-04-08 00:56:03.563595 | orchestrator |
2026-04-08 00:56:03.563602 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-04-08 00:56:03.563609 | orchestrator | Wednesday 08 April 2026 00:55:09 +0000 (0:00:00.863) 0:00:05.092 *******
2026-04-08 00:56:03.563615 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-04-08 00:56:03.563619 | orchestrator | ok: [testbed-manager]
2026-04-08 00:56:03.563624 | orchestrator |
2026-04-08 00:56:03.563628 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-04-08 00:56:03.563637 | orchestrator | Wednesday 08 April 2026 00:55:51 +0000 (0:00:42.617) 0:00:47.710 *******
2026-04-08 00:56:03.563642 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-04-08 00:56:03.563647 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-04-08 00:56:03.563651 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-04-08 00:56:03.563656 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-04-08 00:56:03.563660 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-04-08 00:56:03.563665 | orchestrator |
2026-04-08 00:56:03.563669 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-04-08 00:56:03.563674 | orchestrator | Wednesday 08 April 2026 00:55:55 +0000 (0:00:03.775) 0:00:51.485 *******
2026-04-08 00:56:03.563678 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-04-08 00:56:03.563683 | orchestrator |
2026-04-08 00:56:03.563687 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-04-08 00:56:03.563692 | orchestrator | Wednesday 08 April 2026 00:55:56 +0000 (0:00:00.599) 0:00:52.085 *******
2026-04-08 00:56:03.563696 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:56:03.563701 | orchestrator |
2026-04-08 00:56:03.563705 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-04-08 00:56:03.563710 | orchestrator | Wednesday 08 April 2026 00:55:56 +0000 (0:00:00.136) 0:00:52.221 *******
2026-04-08 00:56:03.563714 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:56:03.563719 | orchestrator |
2026-04-08 00:56:03.563723 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-04-08 00:56:03.563728 | orchestrator | Wednesday 08 April 2026 00:55:56 +0000 (0:00:00.298) 0:00:52.520 *******
2026-04-08 00:56:03.563732 | orchestrator | changed: [testbed-manager]
2026-04-08 00:56:03.563737 | orchestrator |
2026-04-08 00:56:03.563741 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-04-08 00:56:03.563746 | orchestrator | Wednesday 08 April 2026 00:55:57 +0000 (0:00:01.467) 0:00:53.988 *******
2026-04-08 00:56:03.563750 | orchestrator | changed: [testbed-manager]
2026-04-08 00:56:03.563755 | orchestrator |
2026-04-08 00:56:03.563759 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-04-08 00:56:03.563764 | orchestrator | Wednesday 08 April 2026 00:55:58 +0000 (0:00:00.729) 0:00:54.717 *******
2026-04-08 00:56:03.563768 | orchestrator | changed: [testbed-manager]
2026-04-08 00:56:03.563773 | orchestrator |
2026-04-08 00:56:03.563777 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-04-08 00:56:03.563782 | orchestrator | Wednesday 08 April 2026 00:55:59 +0000 (0:00:00.608) 0:00:55.326 *******
2026-04-08 00:56:03.563786 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-04-08 00:56:03.563791 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-04-08 00:56:03.563796 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-04-08 00:56:03.563804 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-04-08 00:56:03.563809 | orchestrator |
2026-04-08 00:56:03.563813 | orchestrator | PLAY RECAP *********************************************************************
2026-04-08 00:56:03.563818 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-08 00:56:03.563823 | orchestrator |
2026-04-08 00:56:03.563827 | orchestrator |
2026-04-08 00:56:03.563832 | orchestrator | TASKS RECAP ********************************************************************
2026-04-08 00:56:03.563837 | orchestrator | Wednesday 08 April 2026 00:56:01 +0000 (0:00:02.573) 0:00:57.900 *******
2026-04-08 00:56:03.563841 | orchestrator | ===============================================================================
2026-04-08 00:56:03.563846 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.62s
2026-04-08 00:56:03.563850 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.78s
2026-04-08 00:56:03.563855 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 2.57s
2026-04-08 00:56:03.563859 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.61s
2026-04-08 00:56:03.563863 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.47s
2026-04-08 00:56:03.563867 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.17s
2026-04-08 00:56:03.563871 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.91s
2026-04-08 00:56:03.563875 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.86s
2026-04-08 00:56:03.563881 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.73s
2026-04-08 00:56:03.563887 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.61s
2026-04-08 00:56:03.563893 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.60s
2026-04-08 00:56:03.563899 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.30s
2026-04-08 00:56:03.563904 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s
2026-04-08 00:56:03.563910 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s
2026-04-08 00:56:03.563916 | orchestrator | 2026-04-08 00:56:03 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:56:06.612746 | orchestrator | 2026-04-08 00:56:06 | INFO  | Task fc1cb447-32c5-4fcf-981e-096eb440472c is in state STARTED
2026-04-08 00:56:06.614255 | orchestrator | 2026-04-08 00:56:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 00:56:06.615263 | orchestrator | 2026-04-08 00:56:06 | INFO  | Task d4ba0406-23a5-4811-a75f-99387af6b44a is in state STARTED
2026-04-08 00:56:06.617326 | orchestrator | 2026-04-08 00:56:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 00:56:06.618771 | orchestrator | 2026-04-08 00:56:06 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:56:09.660659 | orchestrator | 2026-04-08 00:56:09 | INFO  | Task fc1cb447-32c5-4fcf-981e-096eb440472c is in state STARTED
2026-04-08 00:56:09.661022 | orchestrator | 2026-04-08 00:56:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 00:56:09.663521 | orchestrator | 2026-04-08 00:56:09 | INFO  | Task d4ba0406-23a5-4811-a75f-99387af6b44a is in state STARTED
2026-04-08 00:56:09.663903 | orchestrator | 2026-04-08 00:56:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 00:56:09.663922 | orchestrator | 2026-04-08 00:56:09 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:56:12.704891 | orchestrator | 2026-04-08 00:56:12 | INFO  | Task fc1cb447-32c5-4fcf-981e-096eb440472c is in state STARTED
2026-04-08 00:56:12.707845 | orchestrator | 2026-04-08 00:56:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 00:56:12.708171 | orchestrator | 2026-04-08 00:56:12 | INFO  | Task d4ba0406-23a5-4811-a75f-99387af6b44a is in state STARTED
2026-04-08 00:56:12.710181 | orchestrator | 2026-04-08 00:56:12 | INFO  
| Task 
29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 00:57:07.597593 | orchestrator | 2026-04-08 00:57:07 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:57:10.650140 | orchestrator | 2026-04-08 00:57:10 | INFO  | Task fc1cb447-32c5-4fcf-981e-096eb440472c is in state STARTED
2026-04-08 00:57:10.650295 | orchestrator | 2026-04-08 00:57:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 00:57:10.650306 | orchestrator | 2026-04-08 00:57:10 | INFO  | Task d4ba0406-23a5-4811-a75f-99387af6b44a is in state STARTED
2026-04-08 00:57:10.650310 | orchestrator | 2026-04-08 00:57:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 00:57:10.650314 | orchestrator | 2026-04-08 00:57:10 | INFO  | Wait 1 second(s) until the next check
2026-04-08 00:57:13.678964 | orchestrator | 2026-04-08 00:57:13 | INFO  | Task fc1cb447-32c5-4fcf-981e-096eb440472c is in state STARTED
2026-04-08 00:57:13.683348 | orchestrator | 2026-04-08 00:57:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 00:57:13.689717 | orchestrator | 2026-04-08 00:57:13 | INFO  | Task d4ba0406-23a5-4811-a75f-99387af6b44a is in state SUCCESS
2026-04-08 00:57:13.689832 | orchestrator |
2026-04-08 00:57:13.692192 | orchestrator |
2026-04-08 00:57:13.692251 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-08 00:57:13.692259 | orchestrator |
2026-04-08 00:57:13.692263 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-08 00:57:13.692268 | orchestrator | Wednesday 08 April 2026 00:56:05 +0000 (0:00:00.422) 0:00:00.422 *******
2026-04-08 00:57:13.692273 | orchestrator | ok: [testbed-manager]
2026-04-08 00:57:13.692278 | orchestrator | ok: [testbed-node-0]
2026-04-08 00:57:13.692282 | orchestrator | ok: [testbed-node-1]
2026-04-08 00:57:13.692286 | orchestrator | ok: [testbed-node-2]
2026-04-08 00:57:13.692290 | orchestrator | ok: [testbed-node-3]
2026-04-08 00:57:13.692294 | orchestrator | ok: [testbed-node-4]
2026-04-08 00:57:13.692298 | orchestrator | ok: [testbed-node-5]
2026-04-08 00:57:13.692302 | orchestrator |
2026-04-08 00:57:13.692306 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-08 00:57:13.692310 | orchestrator | Wednesday 08 April 2026 00:56:06 +0000 (0:00:00.808) 0:00:01.231 *******
2026-04-08 00:57:13.692315 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-04-08 00:57:13.692319 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-04-08 00:57:13.692323 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-04-08 00:57:13.692327 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-04-08 00:57:13.692330 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-04-08 00:57:13.692334 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-04-08 00:57:13.692357 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-04-08 00:57:13.692361 | orchestrator |
2026-04-08 00:57:13.692365 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-04-08 00:57:13.692368 | orchestrator |
2026-04-08 00:57:13.692372 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-08 00:57:13.692376 | orchestrator | Wednesday 08 April 2026 00:56:07 +0000 (0:00:00.993) 0:00:02.225 *******
2026-04-08 00:57:13.692380 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-08 00:57:13.692386 | orchestrator |
2026-04-08 00:57:13.692390 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-04-08 00:57:13.692393 | orchestrator | Wednesday 08 April 2026 00:56:08 +0000 (0:00:01.411) 0:00:03.636 *******
2026-04-08 00:57:13.692400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:57:13.692407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:57:13.692424 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 
'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-08 00:57:13.692447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.692454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.692466 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.692472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.692480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.692486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.692492 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.692514 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.692528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.692534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': 
{}}}) 2026-04-08 00:57:13.692543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.692547 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.692551 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.692559 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:57:13.692566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.692570 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-04-08 00:57:13.692581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.692585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.692589 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.692593 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.692597 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.692604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.692611 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.692619 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.692623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.692627 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.692631 | orchestrator | 2026-04-08 00:57:13.692635 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-08 00:57:13.692639 | orchestrator | Wednesday 08 April 2026 00:56:12 +0000 (0:00:03.874) 
0:00:07.510 ******* 2026-04-08 00:57:13.692643 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-08 00:57:13.692647 | orchestrator | 2026-04-08 00:57:13.692651 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-08 00:57:13.692655 | orchestrator | Wednesday 08 April 2026 00:56:14 +0000 (0:00:01.568) 0:00:09.079 ******* 2026-04-08 00:57:13.692661 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-08 00:57:13.692666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.692676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.692680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.692684 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-04-08 00:57:13.692688 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.692692 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.692696 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.692703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.692713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.692717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.692722 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.692726 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.692730 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.692733 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.692737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.692752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.692759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.692763 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.692767 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.692771 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.692775 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-08 00:57:13.692781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.692789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.693260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.693289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.693297 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.693305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.693311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.693318 | orchestrator |
2026-04-08 00:57:13.693325 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2026-04-08 00:57:13.693331 | orchestrator | Wednesday 08 April 2026 00:56:19 +0000 (0:00:04.914) 0:00:13.994 *******
2026-04-08 00:57:13.693338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:57:13.693360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:57:13.693373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.693380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.693387 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-08 00:57:13.693394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.693401 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:57:13.693416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.693424 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.693433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.693439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.693446 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-08 00:57:13.693452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.693463 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:57:13.693469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:57:13.693478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.693484 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:57:13.693494 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.693500 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:57:13.693506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.693512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:57:13.693518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.693525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.693536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:57:13.693545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.693552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.693561 | orchestrator | skipping: [testbed-node-3]
2026-04-08 00:57:13.693568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.693575 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:57:13.693605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.693613 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:57:13.693619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.693631 | orchestrator | skipping: [testbed-node-4]
2026-04-08 00:57:13.693638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.693644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.693651 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:57:13.693657 | orchestrator |
2026-04-08 00:57:13.693664 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-04-08 00:57:13.693671 | orchestrator | Wednesday 08 April 2026 00:56:21 +0000 (0:00:01.941) 0:00:15.935 *******
2026-04-08 00:57:13.693678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:57:13.693691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.693698 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-08 00:57:13.693795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:57:13.693816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.693823 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:57:13.693832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.693844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:57:13.693850 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.693857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.693864 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-08 00:57:13.693875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.693881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.694074 | orchestrator | skipping: [testbed-node-0]
2026-04-08 00:57:13.694084 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.694090 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:57:13.694102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.694109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.694115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.694126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:57:13.694133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.694139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.694145 | orchestrator | skipping: [testbed-node-1]
2026-04-08 00:57:13.694156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-08 00:57:13.694453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.694520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-08 00:57:13.694527 | orchestrator | skipping: [testbed-node-2]
2026-04-08 00:57:13.694533 | orchestrator | skipping:
[testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-08 00:57:13.694555 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:57:13.694559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 00:57:13.694564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 00:57:13.694568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 00:57:13.694583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-08 00:57:13.694589 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:57:13.694597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-08 00:57:13.694606 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:57:13.694613 | orchestrator | 2026-04-08 00:57:13.694630 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-08 00:57:13.694637 | orchestrator | Wednesday 08 April 2026 00:56:23 +0000 (0:00:02.292) 0:00:18.228 ******* 2026-04-08 00:57:13.694643 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.694657 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-08 00:57:13.694664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.694671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.694678 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.694689 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.694706 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.694713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.694725 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.694732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.694739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.694745 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.694756 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.694763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.694774 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.694786 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.694793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.694802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.694808 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.694815 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.694824 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.694832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.694843 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:57:13.694851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.694858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.694864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.694874 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.694880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.694895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.694901 | orchestrator | 2026-04-08 00:57:13.694907 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-08 00:57:13.694913 | orchestrator | Wednesday 08 April 2026 00:56:28 +0000 (0:00:05.008) 0:00:23.236 ******* 2026-04-08 00:57:13.694919 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 00:57:13.694925 | 
orchestrator | 2026-04-08 00:57:13.694931 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-08 00:57:13.694936 | orchestrator | Wednesday 08 April 2026 00:56:29 +0000 (0:00:00.833) 0:00:24.070 ******* 2026-04-08 00:57:13.694942 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:57:13.694947 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:13.694953 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:13.694958 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:13.694965 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:57:13.694971 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:57:13.694977 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:57:13.694982 | orchestrator | 2026-04-08 00:57:13.694988 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-08 00:57:13.694994 | orchestrator | Wednesday 08 April 2026 00:56:29 +0000 (0:00:00.736) 0:00:24.806 ******* 2026-04-08 00:57:13.695000 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 00:57:13.695005 | orchestrator | 2026-04-08 00:57:13.695011 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-08 00:57:13.695016 | orchestrator | Wednesday 08 April 2026 00:56:30 +0000 (0:00:00.695) 0:00:25.501 ******* 2026-04-08 00:57:13.695022 | orchestrator | [WARNING]: Skipped 2026-04-08 00:57:13.695029 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 00:57:13.695036 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-08 00:57:13.695043 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 00:57:13.695048 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-08 00:57:13.695055 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 00:57:13.695061 | 
orchestrator | [WARNING]: Skipped 2026-04-08 00:57:13.695067 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 00:57:13.695073 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-08 00:57:13.695079 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 00:57:13.695084 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-08 00:57:13.695090 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 00:57:13.695096 | orchestrator | [WARNING]: Skipped 2026-04-08 00:57:13.695101 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 00:57:13.695107 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-08 00:57:13.695113 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 00:57:13.695119 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-08 00:57:13.695126 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-08 00:57:13.695132 | orchestrator | [WARNING]: Skipped 2026-04-08 00:57:13.695139 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 00:57:13.695145 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-08 00:57:13.695159 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 00:57:13.695165 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-08 00:57:13.695193 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-08 00:57:13.695201 | orchestrator | [WARNING]: Skipped 2026-04-08 00:57:13.695208 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 00:57:13.695213 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-04-08 00:57:13.695220 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 00:57:13.695225 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-08 00:57:13.695231 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-08 00:57:13.695236 | orchestrator | [WARNING]: Skipped 2026-04-08 00:57:13.695243 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 00:57:13.695250 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-08 00:57:13.695262 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 00:57:13.695269 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-08 00:57:13.695275 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-08 00:57:13.695282 | orchestrator | [WARNING]: Skipped 2026-04-08 00:57:13.695289 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 00:57:13.695295 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-04-08 00:57:13.695302 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-08 00:57:13.695308 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-08 00:57:13.695314 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-08 00:57:13.695318 | orchestrator | 2026-04-08 00:57:13.695323 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-08 00:57:13.695333 | orchestrator | Wednesday 08 April 2026 00:56:32 +0000 (0:00:01.456) 0:00:26.959 ******* 2026-04-08 00:57:13.695338 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-08 00:57:13.695343 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:13.695348 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  
2026-04-08 00:57:13.695352 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:13.695357 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-08 00:57:13.695361 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:57:13.695365 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-08 00:57:13.695370 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:57:13.695375 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-08 00:57:13.695380 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:13.695384 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-08 00:57:13.695388 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:57:13.695393 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-08 00:57:13.695397 | orchestrator | 2026-04-08 00:57:13.695402 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-04-08 00:57:13.695407 | orchestrator | Wednesday 08 April 2026 00:56:45 +0000 (0:00:13.370) 0:00:40.329 ******* 2026-04-08 00:57:13.695411 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-08 00:57:13.695416 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:13.695420 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-08 00:57:13.695425 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:13.695435 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-08 00:57:13.695439 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:13.695443 | orchestrator | 
skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-08 00:57:13.695448 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:57:13.695452 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-08 00:57:13.695455 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:57:13.695459 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-08 00:57:13.695463 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:57:13.695468 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-08 00:57:13.695475 | orchestrator | 2026-04-08 00:57:13.695481 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-08 00:57:13.695487 | orchestrator | Wednesday 08 April 2026 00:56:48 +0000 (0:00:03.043) 0:00:43.372 ******* 2026-04-08 00:57:13.695493 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-08 00:57:13.695500 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-08 00:57:13.695506 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:13.695513 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:13.695519 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-08 00:57:13.695525 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:13.695532 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-08 00:57:13.695537 | orchestrator | skipping: [testbed-node-3] 
2026-04-08 00:57:13.695541 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-08 00:57:13.695545 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:57:13.695549 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-08 00:57:13.695553 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:57:13.695563 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-08 00:57:13.695567 | orchestrator | 2026-04-08 00:57:13.695571 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-08 00:57:13.695575 | orchestrator | Wednesday 08 April 2026 00:56:49 +0000 (0:00:01.323) 0:00:44.695 ******* 2026-04-08 00:57:13.695579 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 00:57:13.695583 | orchestrator | 2026-04-08 00:57:13.695586 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-08 00:57:13.695590 | orchestrator | Wednesday 08 April 2026 00:56:50 +0000 (0:00:00.724) 0:00:45.420 ******* 2026-04-08 00:57:13.695595 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:57:13.695598 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:13.695602 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:13.695606 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:13.695610 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:57:13.695614 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:57:13.695622 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:57:13.695626 | orchestrator | 2026-04-08 00:57:13.695630 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-08 
00:57:13.695634 | orchestrator | Wednesday 08 April 2026 00:56:51 +0000 (0:00:00.726) 0:00:46.147 ******* 2026-04-08 00:57:13.695645 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:57:13.695649 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:57:13.695653 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:57:13.695657 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:57:13.695661 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:57:13.695664 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:57:13.695668 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:57:13.695723 | orchestrator | 2026-04-08 00:57:13.695728 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-08 00:57:13.695732 | orchestrator | Wednesday 08 April 2026 00:56:52 +0000 (0:00:01.661) 0:00:47.808 ******* 2026-04-08 00:57:13.695736 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-08 00:57:13.695741 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:57:13.695745 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-08 00:57:13.695749 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:13.695753 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-08 00:57:13.695757 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:13.695761 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-08 00:57:13.695765 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:13.695769 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-08 00:57:13.695773 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:57:13.695776 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-08 00:57:13.695781 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:57:13.695784 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-08 00:57:13.695788 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:57:13.695792 | orchestrator | 2026-04-08 00:57:13.695796 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-08 00:57:13.695800 | orchestrator | Wednesday 08 April 2026 00:56:54 +0000 (0:00:01.227) 0:00:49.036 ******* 2026-04-08 00:57:13.695804 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-08 00:57:13.695808 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:13.695812 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-08 00:57:13.695816 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:13.695819 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-08 00:57:13.695823 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:13.695827 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-08 00:57:13.695831 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:57:13.695835 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-08 00:57:13.695839 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:57:13.695843 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-08 00:57:13.695846 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:57:13.695850 | 
orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-08 00:57:13.695854 | orchestrator | 2026-04-08 00:57:13.695858 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-08 00:57:13.695862 | orchestrator | Wednesday 08 April 2026 00:56:55 +0000 (0:00:01.500) 0:00:50.537 ******* 2026-04-08 00:57:13.695866 | orchestrator | [WARNING]: Skipped 2026-04-08 00:57:13.695877 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-08 00:57:13.695881 | orchestrator | due to this access issue: 2026-04-08 00:57:13.695885 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-08 00:57:13.695889 | orchestrator | not a directory 2026-04-08 00:57:13.695893 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-08 00:57:13.695897 | orchestrator | 2026-04-08 00:57:13.695901 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-04-08 00:57:13.695908 | orchestrator | Wednesday 08 April 2026 00:56:56 +0000 (0:00:00.966) 0:00:51.503 ******* 2026-04-08 00:57:13.695912 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:57:13.695916 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:13.695920 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:13.695924 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:13.695927 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:57:13.695931 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:57:13.695935 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:57:13.695939 | orchestrator | 2026-04-08 00:57:13.695943 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-08 00:57:13.695947 | orchestrator | Wednesday 08 April 2026 00:56:57 +0000 (0:00:00.492) 0:00:51.996 ******* 
2026-04-08 00:57:13.695951 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:57:13.695955 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:13.695959 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:13.695962 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:13.695967 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:57:13.695974 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:57:13.695978 | orchestrator | skipping: [testbed-node-5] 2026-04-08 00:57:13.695982 | orchestrator | 2026-04-08 00:57:13.695986 | orchestrator | TASK [service-check-containers : prometheus | Check containers] **************** 2026-04-08 00:57:13.695990 | orchestrator | Wednesday 08 April 2026 00:56:57 +0000 (0:00:00.595) 0:00:52.591 ******* 2026-04-08 00:57:13.695996 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-08 00:57:13.696001 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.696006 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.696014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.696021 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.696026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.696034 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.696038 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-08 00:57:13.696042 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.696047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.696054 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.696059 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.696066 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.696075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.696079 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.696084 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.696088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.696096 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.696100 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.696104 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.696111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.696118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.696122 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:57:13.696126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.696133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-08 00:57:13.696137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.696144 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.696148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-04-08 00:57:13.696156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-08 00:57:13.696160 | orchestrator | 2026-04-08 00:57:13.696164 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] *** 2026-04-08 00:57:13.696168 | orchestrator | Wednesday 08 April 2026 00:57:01 +0000 (0:00:03.539) 0:00:56.131 ******* 2026-04-08 00:57:13.696251 | orchestrator | changed: [testbed-manager] => { 2026-04-08 00:57:13.696259 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:57:13.696263 | orchestrator | } 2026-04-08 00:57:13.696267 | orchestrator | changed: [testbed-node-0] => { 2026-04-08 00:57:13.696271 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:57:13.696275 | orchestrator | } 2026-04-08 00:57:13.696279 | orchestrator | changed: [testbed-node-1] => { 2026-04-08 00:57:13.696282 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:57:13.696286 | orchestrator | } 2026-04-08 00:57:13.696290 | orchestrator | changed: [testbed-node-2] => { 2026-04-08 00:57:13.696301 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:57:13.696308 | orchestrator | } 2026-04-08 00:57:13.696313 | orchestrator | changed: [testbed-node-3] => { 2026-04-08 00:57:13.696319 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:57:13.696325 | orchestrator | } 2026-04-08 00:57:13.696331 | orchestrator | changed: [testbed-node-4] => { 2026-04-08 00:57:13.696337 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 
00:57:13.696343 | orchestrator | } 2026-04-08 00:57:13.696349 | orchestrator | changed: [testbed-node-5] => { 2026-04-08 00:57:13.696355 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:57:13.696361 | orchestrator | } 2026-04-08 00:57:13.696367 | orchestrator | 2026-04-08 00:57:13.696374 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-08 00:57:13.696380 | orchestrator | Wednesday 08 April 2026 00:57:02 +0000 (0:00:00.900) 0:00:57.031 ******* 2026-04-08 00:57:13.696386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 00:57:13.696394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:57:13.696399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:57:13.696407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 00:57:13.696416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:57:13.696421 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-08 00:57:13.696430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 00:57:13.696434 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 00:57:13.696438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:57:13.696442 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 00:57:13.696450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:57:13.696459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 
00:57:13.696466 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:57:13.696471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:57:13.696475 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:57:13.696479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 00:57:13.696486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:57:13.696490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:57:13.696497 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 00:57:13.696505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-08 00:57:13.696509 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:13.696513 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:13.696517 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:57:13.696520 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:13.696524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 00:57:13.696528 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 00:57:13.696532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-08 00:57:13.696536 | orchestrator | skipping: [testbed-node-3] 2026-04-08 00:57:13.696540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 00:57:13.696546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-08 00:57:13.696556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-08 00:57:13.696560 | orchestrator | skipping: [testbed-node-4] 2026-04-08 00:57:13.696564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-08 00:57:13.696568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.696572 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-08 00:57:13.696576 | orchestrator | skipping: [testbed-node-5]
2026-04-08 00:57:13.696580 | orchestrator |
2026-04-08 00:57:13.696584 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-04-08 00:57:13.696589 | orchestrator | Wednesday 08 April 2026 00:57:04 +0000 (0:00:01.950) 0:00:58.982 *******
2026-04-08 00:57:13.696595 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-08 00:57:13.696601 | orchestrator | skipping: [testbed-manager]
2026-04-08 00:57:13.696611 | orchestrator |
2026-04-08 00:57:13.696621 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-08 00:57:13.696627 | orchestrator | Wednesday 08 April 2026 00:57:05 +0000 (0:00:01.242) 0:01:00.224 *******
2026-04-08 00:57:13.696632 | orchestrator |
2026-04-08 00:57:13.696638 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-08 00:57:13.696643 | orchestrator | Wednesday 08 April 2026 00:57:05 +0000 (0:00:00.069) 0:01:00.294 *******
2026-04-08 00:57:13.696649 | orchestrator |
2026-04-08 00:57:13.696654 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-08 00:57:13.696662 | orchestrator | Wednesday 08 April 2026 00:57:05 +0000 (0:00:00.318) 0:01:00.613 *******
2026-04-08 00:57:13.696667 | orchestrator |
2026-04-08 00:57:13.696673 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-08 00:57:13.696679 | orchestrator | Wednesday 08 April 2026 00:57:05 +0000 (0:00:00.065) 0:01:00.678 *******
2026-04-08 00:57:13.696684 | orchestrator |
2026-04-08 00:57:13.696691 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-08 00:57:13.696697 | orchestrator | Wednesday 08 April 2026 00:57:05 +0000 (0:00:00.066) 0:01:00.745 *******
2026-04-08 00:57:13.696708 | orchestrator |
2026-04-08 00:57:13.696713 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-08 00:57:13.696719 | orchestrator | Wednesday 08 April 2026 00:57:05 +0000 (0:00:00.065) 0:01:00.811 *******
2026-04-08 00:57:13.696724 | orchestrator |
2026-04-08 00:57:13.696730 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-08 00:57:13.696739 | orchestrator | Wednesday 08 April 2026 00:57:06 +0000 (0:00:00.063) 0:01:00.875 *******
2026-04-08 00:57:13.696746 | orchestrator |
2026-04-08 00:57:13.696752 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-04-08 00:57:13.696757 | orchestrator | Wednesday 08 April 2026 00:57:06 +0000 (0:00:00.084) 0:01:00.959 *******
2026-04-08 00:57:13.696771 | orchestrator | fatal: [testbed-manager]: FAILED!
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.2.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_nd6xnti6/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_nd6xnti6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_nd6xnti6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_nd6xnti6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise 
cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.2.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:57:13.696779 | orchestrator | 2026-04-08 00:57:13.696785 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-04-08 00:57:13.696791 | orchestrator | Wednesday 08 April 2026 00:57:08 +0000 (0:00:02.519) 0:01:03.479 ******* 2026-04-08 00:57:13.696806 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_gtoygx23/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_gtoygx23/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_gtoygx23/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_gtoygx23/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:57:13.696819 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_jpedqj1e/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_jpedqj1e/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_jpedqj1e/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_jpedqj1e/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:57:13.696839 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_9u7paw39/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_9u7paw39/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_9u7paw39/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_9u7paw39/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:57:13.696851 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_w04mptan/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_w04mptan/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_w04mptan/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_w04mptan/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:57:13.696868 | orchestrator | fatal: [testbed-node-3]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_buvxlf4y/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_buvxlf4y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_buvxlf4y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_buvxlf4y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:57:13.696884 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_7ziiz6wc/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_7ziiz6wc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_7ziiz6wc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_7ziiz6wc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"} 2026-04-08 00:57:13.696899 | orchestrator | 2026-04-08 00:57:13.696906 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:57:13.696913 | orchestrator | testbed-manager : ok=18  changed=9  unreachable=0 failed=1  skipped=10  rescued=0 ignored=0 2026-04-08 00:57:13.696921 | orchestrator | testbed-node-0 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2026-04-08 00:57:13.696928 | orchestrator | testbed-node-1 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2026-04-08 00:57:13.696934 | orchestrator | testbed-node-2 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2026-04-08 00:57:13.696940 | orchestrator | testbed-node-3 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0 2026-04-08 00:57:13.696947 | orchestrator | testbed-node-4 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0 2026-04-08 00:57:13.696954 | orchestrator | testbed-node-5 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0 2026-04-08 00:57:13.696960 | orchestrator | 2026-04-08 00:57:13.696966 | orchestrator | 2026-04-08 00:57:13.696973 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-08 00:57:13.696979 | orchestrator | Wednesday 08 April 2026 00:57:13 +0000 (0:00:04.413) 0:01:07.892 ******* 2026-04-08 00:57:13.696985 | orchestrator | =============================================================================== 2026-04-08 00:57:13.696999 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.37s 2026-04-08 00:57:13.697003 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.01s 2026-04-08 00:57:13.697007 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 4.91s 2026-04-08 00:57:13.697011 | orchestrator | prometheus : Restart prometheus-node-exporter container ----------------- 4.41s 2026-04-08 00:57:13.697016 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.87s 2026-04-08 00:57:13.697019 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 3.54s 2026-04-08 00:57:13.697023 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.04s 2026-04-08 00:57:13.697027 | orchestrator | prometheus : Restart prometheus-server container ------------------------ 2.52s 2026-04-08 00:57:13.697031 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.29s 2026-04-08 00:57:13.697034 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.95s 2026-04-08 00:57:13.697038 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 1.94s 2026-04-08 00:57:13.697043 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 1.66s 2026-04-08 00:57:13.697047 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.57s 2026-04-08 00:57:13.697051 | orchestrator | prometheus : Copying 
config file for blackbox exporter ------------------ 1.50s 2026-04-08 00:57:13.697054 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.46s 2026-04-08 00:57:13.697058 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.41s 2026-04-08 00:57:13.697066 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.32s 2026-04-08 00:57:13.697070 | orchestrator | prometheus : Creating prometheus database user and setting permissions --- 1.24s 2026-04-08 00:57:13.697073 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.23s 2026-04-08 00:57:13.697077 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.99s 2026-04-08 00:57:13.697081 | orchestrator | 2026-04-08 00:57:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:57:13.697086 | orchestrator | 2026-04-08 00:57:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:57:16.744047 | orchestrator | 2026-04-08 00:57:16 | INFO  | Task fc1cb447-32c5-4fcf-981e-096eb440472c is in state STARTED 2026-04-08 00:57:16.746143 | orchestrator | 2026-04-08 00:57:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 00:57:16.748361 | orchestrator | 2026-04-08 00:57:16 | INFO  | Task e2707348-ae7d-4484-9e84-fbb74ed8fa8d is in state STARTED 2026-04-08 00:57:16.749653 | orchestrator | 2026-04-08 00:57:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:57:16.749758 | orchestrator | 2026-04-08 00:57:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:57:19.792665 | orchestrator | 2026-04-08 00:57:19 | INFO  | Task fc1cb447-32c5-4fcf-981e-096eb440472c is in state STARTED 2026-04-08 00:57:19.792750 | orchestrator | 2026-04-08 00:57:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 
00:57:19.793420 | orchestrator | 2026-04-08 00:57:19 | INFO  | Task e2707348-ae7d-4484-9e84-fbb74ed8fa8d is in state STARTED 2026-04-08 00:57:19.794415 | orchestrator | 2026-04-08 00:57:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:57:19.794453 | orchestrator | 2026-04-08 00:57:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:57:22.838298 | orchestrator | 2026-04-08 00:57:22 | INFO  | Task fc1cb447-32c5-4fcf-981e-096eb440472c is in state SUCCESS 2026-04-08 00:57:22.839420 | orchestrator | 2026-04-08 00:57:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 00:57:22.840734 | orchestrator | 2026-04-08 00:57:22 | INFO  | Task e2707348-ae7d-4484-9e84-fbb74ed8fa8d is in state STARTED 2026-04-08 00:57:22.841790 | orchestrator | 2026-04-08 00:57:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:57:22.841809 | orchestrator | 2026-04-08 00:57:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:57:25.882435 | orchestrator | 2026-04-08 00:57:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 00:57:25.884511 | orchestrator | 2026-04-08 00:57:25 | INFO  | Task e2707348-ae7d-4484-9e84-fbb74ed8fa8d is in state STARTED 2026-04-08 00:57:25.886270 | orchestrator | 2026-04-08 00:57:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:57:25.886312 | orchestrator | 2026-04-08 00:57:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:57:28.917115 | orchestrator | 2026-04-08 00:57:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 00:57:28.918804 | orchestrator | 2026-04-08 00:57:28 | INFO  | Task e2707348-ae7d-4484-9e84-fbb74ed8fa8d is in state STARTED 2026-04-08 00:57:28.920434 | orchestrator | 2026-04-08 00:57:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:57:28.920474 | orchestrator 
| 2026-04-08 00:57:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:57:31.971066 | orchestrator | 2026-04-08 00:57:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 00:57:31.973344 | orchestrator | 2026-04-08 00:57:31 | INFO  | Task e2707348-ae7d-4484-9e84-fbb74ed8fa8d is in state SUCCESS 2026-04-08 00:57:31.974283 | orchestrator | 2026-04-08 00:57:31.974332 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-08 00:57:31.974339 | orchestrator | 2.16.14 2026-04-08 00:57:31.974346 | orchestrator | 2026-04-08 00:57:31.974353 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2026-04-08 00:57:31.974360 | orchestrator | 2026-04-08 00:57:31.974366 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-04-08 00:57:31.974373 | orchestrator | Wednesday 08 April 2026 00:56:07 +0000 (0:00:00.255) 0:00:00.255 ******* 2026-04-08 00:57:31.974379 | orchestrator | changed: [testbed-manager] 2026-04-08 00:57:31.974386 | orchestrator | 2026-04-08 00:57:31.974396 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-08 00:57:31.974407 | orchestrator | Wednesday 08 April 2026 00:56:09 +0000 (0:00:02.264) 0:00:02.520 ******* 2026-04-08 00:57:31.974416 | orchestrator | changed: [testbed-manager] 2026-04-08 00:57:31.974422 | orchestrator | 2026-04-08 00:57:31.974427 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-08 00:57:31.974449 | orchestrator | Wednesday 08 April 2026 00:56:10 +0000 (0:00:01.235) 0:00:03.756 ******* 2026-04-08 00:57:31.974455 | orchestrator | changed: [testbed-manager] 2026-04-08 00:57:31.974461 | orchestrator | 2026-04-08 00:57:31.974467 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-08 00:57:31.974473 |
orchestrator | Wednesday 08 April 2026 00:56:11 +0000 (0:00:01.137) 0:00:04.893 ******* 2026-04-08 00:57:31.974479 | orchestrator | changed: [testbed-manager] 2026-04-08 00:57:31.974485 | orchestrator | 2026-04-08 00:57:31.974491 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-04-08 00:57:31.974496 | orchestrator | Wednesday 08 April 2026 00:56:13 +0000 (0:00:01.260) 0:00:06.154 ******* 2026-04-08 00:57:31.974502 | orchestrator | changed: [testbed-manager] 2026-04-08 00:57:31.974507 | orchestrator | 2026-04-08 00:57:31.974513 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-08 00:57:31.974519 | orchestrator | Wednesday 08 April 2026 00:56:14 +0000 (0:00:01.159) 0:00:07.313 ******* 2026-04-08 00:57:31.974545 | orchestrator | changed: [testbed-manager] 2026-04-08 00:57:31.974551 | orchestrator | 2026-04-08 00:57:31.974556 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-04-08 00:57:31.974563 | orchestrator | Wednesday 08 April 2026 00:56:15 +0000 (0:00:01.675) 0:00:08.989 ******* 2026-04-08 00:57:31.974569 | orchestrator | changed: [testbed-manager] 2026-04-08 00:57:31.974576 | orchestrator | 2026-04-08 00:57:31.974582 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-04-08 00:57:31.974589 | orchestrator | Wednesday 08 April 2026 00:56:17 +0000 (0:00:02.076) 0:00:11.065 ******* 2026-04-08 00:57:31.974595 | orchestrator | changed: [testbed-manager] 2026-04-08 00:57:31.974601 | orchestrator | 2026-04-08 00:57:31.974606 | orchestrator | TASK [Create admin user] ******************************************************* 2026-04-08 00:57:31.974612 | orchestrator | Wednesday 08 April 2026 00:56:18 +0000 (0:00:01.029) 0:00:12.094 ******* 2026-04-08 00:57:31.974617 | orchestrator | changed: [testbed-manager] 2026-04-08 00:57:31.974623 | orchestrator | 2026-04-08 
00:57:31.974629 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-04-08 00:57:31.974635 | orchestrator | Wednesday 08 April 2026 00:56:56 +0000 (0:00:37.752) 0:00:49.847 ******* 2026-04-08 00:57:31.974641 | orchestrator | skipping: [testbed-manager] 2026-04-08 00:57:31.974646 | orchestrator | 2026-04-08 00:57:31.974652 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-08 00:57:31.974657 | orchestrator | 2026-04-08 00:57:31.974663 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-08 00:57:31.974668 | orchestrator | Wednesday 08 April 2026 00:56:56 +0000 (0:00:00.127) 0:00:49.974 ******* 2026-04-08 00:57:31.974674 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:57:31.974681 | orchestrator | 2026-04-08 00:57:31.974715 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-08 00:57:31.974721 | orchestrator | 2026-04-08 00:57:31.974726 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-08 00:57:31.974731 | orchestrator | Wednesday 08 April 2026 00:57:08 +0000 (0:00:11.801) 0:01:01.776 ******* 2026-04-08 00:57:31.974737 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:57:31.974743 | orchestrator | 2026-04-08 00:57:31.974749 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-08 00:57:31.974755 | orchestrator | 2026-04-08 00:57:31.974761 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-08 00:57:31.974766 | orchestrator | Wednesday 08 April 2026 00:57:10 +0000 (0:00:01.535) 0:01:03.312 ******* 2026-04-08 00:57:31.974772 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:57:31.974777 | orchestrator | 2026-04-08 00:57:31.974783 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-08 00:57:31.974790 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-08 00:57:31.974797 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:57:31.974803 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:57:31.974809 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-08 00:57:31.974817 | orchestrator | 2026-04-08 00:57:31.974822 | orchestrator | 2026-04-08 00:57:31.974828 | orchestrator | 2026-04-08 00:57:31.974834 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:57:31.974840 | orchestrator | Wednesday 08 April 2026 00:57:22 +0000 (0:00:11.818) 0:01:15.130 ******* 2026-04-08 00:57:31.974846 | orchestrator | =============================================================================== 2026-04-08 00:57:31.974851 | orchestrator | Create admin user ------------------------------------------------------ 37.75s 2026-04-08 00:57:31.974879 | orchestrator | Restart ceph manager service ------------------------------------------- 25.16s 2026-04-08 00:57:31.974886 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.26s 2026-04-08 00:57:31.974893 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.08s 2026-04-08 00:57:31.974909 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.68s 2026-04-08 00:57:31.974916 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.26s 2026-04-08 00:57:31.974922 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.24s 2026-04-08 00:57:31.974928 | 
orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.16s 2026-04-08 00:57:31.974934 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.14s 2026-04-08 00:57:31.974940 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.03s 2026-04-08 00:57:31.974951 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s 2026-04-08 00:57:31.974957 | orchestrator | 2026-04-08 00:57:31.974963 | orchestrator | 2026-04-08 00:57:31.974969 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-08 00:57:31.974975 | orchestrator | 2026-04-08 00:57:31.974980 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-08 00:57:31.974986 | orchestrator | Wednesday 08 April 2026 00:57:16 +0000 (0:00:00.280) 0:00:00.280 ******* 2026-04-08 00:57:31.974992 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:57:31.975000 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:57:31.975006 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:57:31.975013 | orchestrator | 2026-04-08 00:57:31.975019 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-08 00:57:31.975024 | orchestrator | Wednesday 08 April 2026 00:57:16 +0000 (0:00:00.255) 0:00:00.535 ******* 2026-04-08 00:57:31.975030 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-04-08 00:57:31.975037 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-08 00:57:31.975042 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-08 00:57:31.975049 | orchestrator | 2026-04-08 00:57:31.975054 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-08 00:57:31.975060 | orchestrator | 2026-04-08 00:57:31.975065 | orchestrator | TASK 
[grafana : include_tasks] ************************************************* 2026-04-08 00:57:31.975071 | orchestrator | Wednesday 08 April 2026 00:57:17 +0000 (0:00:00.256) 0:00:00.792 ******* 2026-04-08 00:57:31.975076 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:57:31.975083 | orchestrator | 2026-04-08 00:57:31.975089 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-04-08 00:57:31.975094 | orchestrator | Wednesday 08 April 2026 00:57:17 +0000 (0:00:00.530) 0:00:01.322 ******* 2026-04-08 00:57:31.975104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:57:31.975113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:57:31.975133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:57:31.975139 | orchestrator | 2026-04-08 00:57:31.975146 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-08 00:57:31.975153 | orchestrator | Wednesday 08 April 2026 00:57:18 +0000 (0:00:00.927) 0:00:02.250 ******* 2026-04-08 00:57:31.975209 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 00:57:31.975218 | orchestrator | 2026-04-08 00:57:31.975224 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-08 00:57:31.975229 | orchestrator | Wednesday 08 April 2026 00:57:19 +0000 (0:00:00.757) 0:00:03.007 ******* 2026-04-08 00:57:31.975235 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-08 00:57:31.975242 | orchestrator | 2026-04-08 00:57:31.975254 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA 
certificates] ******** 2026-04-08 00:57:31.975260 | orchestrator | Wednesday 08 April 2026 00:57:19 +0000 (0:00:00.462) 0:00:03.470 ******* 2026-04-08 00:57:31.975267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:57:31.975273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:57:31.975280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:57:31.975291 | orchestrator | 2026-04-08 00:57:31.975297 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-08 00:57:31.975303 | orchestrator | Wednesday 08 April 2026 00:57:20 +0000 (0:00:01.223) 0:00:04.693 ******* 2026-04-08 00:57:31.975308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:57:31.975320 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:31.975332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:57:31.975340 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:31.975346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:57:31.975352 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:31.975358 | orchestrator | 2026-04-08 00:57:31.975364 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-08 00:57:31.975369 | orchestrator | Wednesday 08 April 2026 00:57:21 +0000 (0:00:00.376) 0:00:05.069 ******* 2026-04-08 00:57:31.975376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:57:31.975387 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:31.975393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:57:31.975400 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:31.975410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:57:31.975416 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:31.975422 | orchestrator | 2026-04-08 00:57:31.975427 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-08 00:57:31.975433 | orchestrator | Wednesday 08 April 2026 00:57:21 +0000 (0:00:00.530) 0:00:05.600 ******* 2026-04-08 00:57:31.975442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:57:31.975449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:57:31.975455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:57:31.975465 | orchestrator | 2026-04-08 00:57:31.975471 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-08 00:57:31.975477 | orchestrator | Wednesday 08 April 2026 00:57:22 +0000 (0:00:01.075) 0:00:06.676 ******* 2026-04-08 00:57:31.975482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:57:31.975493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:57:31.975503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:57:31.975509 | orchestrator | 2026-04-08 00:57:31.975515 | orchestrator | TASK [grafana : Copying over extra configuration file] 
************************* 2026-04-08 00:57:31.975521 | orchestrator | Wednesday 08 April 2026 00:57:24 +0000 (0:00:01.332) 0:00:08.008 ******* 2026-04-08 00:57:31.975527 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:31.975532 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:31.975538 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:31.975544 | orchestrator | 2026-04-08 00:57:31.975550 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-04-08 00:57:31.975556 | orchestrator | Wednesday 08 April 2026 00:57:24 +0000 (0:00:00.248) 0:00:08.256 ******* 2026-04-08 00:57:31.975566 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-08 00:57:31.975573 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-08 00:57:31.975579 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-08 00:57:31.975585 | orchestrator | 2026-04-08 00:57:31.975591 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-04-08 00:57:31.975597 | orchestrator | Wednesday 08 April 2026 00:57:25 +0000 (0:00:01.084) 0:00:09.341 ******* 2026-04-08 00:57:31.975603 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-08 00:57:31.975610 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-08 00:57:31.975615 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-08 00:57:31.975622 | orchestrator | 2026-04-08 00:57:31.975628 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ****** 2026-04-08 00:57:31.975634 | 
orchestrator | Wednesday 08 April 2026 00:57:26 +0000 (0:00:01.185) 0:00:10.526 ******* 2026-04-08 00:57:31.975639 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-08 00:57:31.975645 | orchestrator | 2026-04-08 00:57:31.975651 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] *************************** 2026-04-08 00:57:31.975658 | orchestrator | Wednesday 08 April 2026 00:57:27 +0000 (0:00:00.696) 0:00:11.222 ******* 2026-04-08 00:57:31.975664 | orchestrator | ok: [testbed-node-0] 2026-04-08 00:57:31.975670 | orchestrator | ok: [testbed-node-1] 2026-04-08 00:57:31.975677 | orchestrator | ok: [testbed-node-2] 2026-04-08 00:57:31.975683 | orchestrator | 2026-04-08 00:57:31.975689 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-04-08 00:57:31.975695 | orchestrator | Wednesday 08 April 2026 00:57:28 +0000 (0:00:00.715) 0:00:11.937 ******* 2026-04-08 00:57:31.975701 | orchestrator | changed: [testbed-node-0] 2026-04-08 00:57:31.975707 | orchestrator | changed: [testbed-node-1] 2026-04-08 00:57:31.975713 | orchestrator | changed: [testbed-node-2] 2026-04-08 00:57:31.975720 | orchestrator | 2026-04-08 00:57:31.975726 | orchestrator | TASK [service-check-containers : grafana | Check containers] ******************* 2026-04-08 00:57:31.975732 | orchestrator | Wednesday 08 April 2026 00:57:29 +0000 (0:00:01.050) 0:00:12.988 ******* 2026-04-08 00:57:31.975738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:57:31.975749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:57:31.975768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-08 00:57:31.975774 | orchestrator | 2026-04-08 00:57:31.975779 | orchestrator | TASK [service-check-containers : grafana | Notify 
handlers to restart containers] *** 2026-04-08 00:57:31.975785 | orchestrator | Wednesday 08 April 2026 00:57:30 +0000 (0:00:00.840) 0:00:13.829 ******* 2026-04-08 00:57:31.975790 | orchestrator | changed: [testbed-node-0] => { 2026-04-08 00:57:31.975796 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:57:31.975804 | orchestrator | } 2026-04-08 00:57:31.975811 | orchestrator | changed: [testbed-node-1] => { 2026-04-08 00:57:31.975818 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:57:31.975823 | orchestrator | } 2026-04-08 00:57:31.975830 | orchestrator | changed: [testbed-node-2] => { 2026-04-08 00:57:31.975836 | orchestrator |  "msg": "Notifying handlers" 2026-04-08 00:57:31.975842 | orchestrator | } 2026-04-08 00:57:31.975847 | orchestrator | 2026-04-08 00:57:31.975853 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-08 00:57:31.975859 | orchestrator | Wednesday 08 April 2026 00:57:30 +0000 (0:00:00.280) 0:00:14.109 ******* 2026-04-08 00:57:31.975864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:57:31.975870 | orchestrator | skipping: [testbed-node-0] 2026-04-08 00:57:31.975876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 
'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:57:31.975882 | orchestrator | skipping: [testbed-node-1] 2026-04-08 00:57:31.975896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-08 00:57:31.975907 | orchestrator | skipping: [testbed-node-2] 2026-04-08 00:57:31.975913 | orchestrator | 2026-04-08 00:57:31.975920 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-04-08 00:57:31.975926 | orchestrator | Wednesday 08 April 2026 00:57:31 +0000 (0:00:00.668) 0:00:14.778 ******* 2026-04-08 00:57:31.975931 | orchestrator | fatal: 
[testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-08 00:57:31.975938 | orchestrator | 2026-04-08 00:57:31.975944 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-08 00:57:31.975954 | orchestrator | testbed-node-0 : ok=16  changed=9  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-08 00:57:31.975961 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-08 00:57:31.975971 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-08 00:57:31.975978 | orchestrator | 2026-04-08 00:57:31.975988 | orchestrator | 2026-04-08 00:57:31.975995 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-08 00:57:31.976005 | orchestrator | Wednesday 08 April 2026 00:57:31 +0000 (0:00:00.623) 0:00:15.402 ******* 2026-04-08 00:57:31.976013 | orchestrator | =============================================================================== 2026-04-08 00:57:31.976021 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.33s 2026-04-08 00:57:31.976031 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.22s 2026-04-08 00:57:31.976040 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.19s 2026-04-08 00:57:31.976049 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.08s 2026-04-08 00:57:31.976058 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.08s 2026-04-08 00:57:31.976068 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.05s 2026-04-08 00:57:31.976075 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.93s 
2026-04-08 00:57:31.976084 | orchestrator | service-check-containers : grafana | Check containers ------------------- 0.84s 2026-04-08 00:57:31.976093 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.76s 2026-04-08 00:57:31.976101 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 0.72s 2026-04-08 00:57:31.976111 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 0.70s 2026-04-08 00:57:31.976120 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.67s 2026-04-08 00:57:31.976128 | orchestrator | grafana : Creating grafana database ------------------------------------- 0.62s 2026-04-08 00:57:31.976139 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.53s 2026-04-08 00:57:31.976149 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.53s 2026-04-08 00:57:31.976176 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.46s 2026-04-08 00:57:31.976186 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.38s 2026-04-08 00:57:31.976195 | orchestrator | service-check-containers : grafana | Notify handlers to restart containers --- 0.28s 2026-04-08 00:57:31.976204 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.26s 2026-04-08 00:57:31.976213 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2026-04-08 00:57:31.976357 | orchestrator | 2026-04-08 00:57:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:57:31.976369 | orchestrator | 2026-04-08 00:57:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 00:57:35.027849 | orchestrator | 2026-04-08 00:57:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 00:57:35.029821 | orchestrator | 2026-04-08 00:57:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 00:57:35.029889 | orchestrator | 2026-04-08 00:57:35 | INFO  | Wait 1 second(s) until the next check [identical polling messages repeated every ~3 seconds from 00:57:38 through 01:00:28: tasks e8d40863-7499-4ee6-9471-9efa1265639c and 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 remain in state STARTED, each followed by "Wait 1 second(s) until the next check"] 2026-04-08 01:00:28.684830 | orchestrator | 2026-04-08 01:00:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:00:28.687076 | orchestrator | 2026-04-08 01:00:28 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:00:28.687143 | orchestrator | 2026-04-08 01:00:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:00:31.734928 | orchestrator | 2026-04-08 01:00:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:00:31.737470 | orchestrator | 2026-04-08 01:00:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:00:31.737532 | orchestrator | 2026-04-08 01:00:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:00:34.785481 | orchestrator | 2026-04-08 01:00:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:00:34.786988 | orchestrator | 2026-04-08 01:00:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:00:34.787066 | orchestrator | 2026-04-08 01:00:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:00:37.833255 | orchestrator | 2026-04-08 01:00:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:00:37.834651 | orchestrator | 2026-04-08 01:00:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:00:37.834692 | orchestrator | 2026-04-08 01:00:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:00:40.875749 | orchestrator | 2026-04-08 01:00:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:00:40.877402 | orchestrator | 2026-04-08 01:00:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:00:40.877512 | orchestrator | 2026-04-08 01:00:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:00:43.921449 | orchestrator | 2026-04-08 01:00:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:00:43.922333 | orchestrator | 2026-04-08 01:00:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
01:00:43.922353 | orchestrator | 2026-04-08 01:00:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:00:46.964482 | orchestrator | 2026-04-08 01:00:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:00:46.966512 | orchestrator | 2026-04-08 01:00:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:00:46.966559 | orchestrator | 2026-04-08 01:00:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:00:50.017828 | orchestrator | 2026-04-08 01:00:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:00:50.020117 | orchestrator | 2026-04-08 01:00:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:00:50.020189 | orchestrator | 2026-04-08 01:00:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:00:53.068007 | orchestrator | 2026-04-08 01:00:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:00:53.069250 | orchestrator | 2026-04-08 01:00:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:00:53.069298 | orchestrator | 2026-04-08 01:00:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:00:56.110628 | orchestrator | 2026-04-08 01:00:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:00:56.113238 | orchestrator | 2026-04-08 01:00:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:00:56.113311 | orchestrator | 2026-04-08 01:00:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:00:59.154178 | orchestrator | 2026-04-08 01:00:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:00:59.155573 | orchestrator | 2026-04-08 01:00:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:00:59.155618 | orchestrator | 2026-04-08 01:00:59 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 01:01:02.201865 | orchestrator | 2026-04-08 01:01:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:01:02.203297 | orchestrator | 2026-04-08 01:01:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:01:02.203385 | orchestrator | 2026-04-08 01:01:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:05.244294 | orchestrator | 2026-04-08 01:01:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:01:05.245608 | orchestrator | 2026-04-08 01:01:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:01:05.245660 | orchestrator | 2026-04-08 01:01:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:08.285614 | orchestrator | 2026-04-08 01:01:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:01:08.287068 | orchestrator | 2026-04-08 01:01:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:01:08.287124 | orchestrator | 2026-04-08 01:01:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:11.330690 | orchestrator | 2026-04-08 01:01:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:01:11.332697 | orchestrator | 2026-04-08 01:01:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:01:11.332750 | orchestrator | 2026-04-08 01:01:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:14.376964 | orchestrator | 2026-04-08 01:01:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:01:14.379481 | orchestrator | 2026-04-08 01:01:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:01:14.379537 | orchestrator | 2026-04-08 01:01:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:17.432649 | orchestrator | 2026-04-08 
01:01:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:01:17.433932 | orchestrator | 2026-04-08 01:01:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:01:17.434000 | orchestrator | 2026-04-08 01:01:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:20.483438 | orchestrator | 2026-04-08 01:01:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:01:20.483939 | orchestrator | 2026-04-08 01:01:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:01:20.483972 | orchestrator | 2026-04-08 01:01:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:23.532730 | orchestrator | 2026-04-08 01:01:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:01:23.534187 | orchestrator | 2026-04-08 01:01:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:01:23.534228 | orchestrator | 2026-04-08 01:01:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:26.582065 | orchestrator | 2026-04-08 01:01:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:01:26.583420 | orchestrator | 2026-04-08 01:01:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:01:26.583746 | orchestrator | 2026-04-08 01:01:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:29.633425 | orchestrator | 2026-04-08 01:01:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:01:29.635378 | orchestrator | 2026-04-08 01:01:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:01:29.635557 | orchestrator | 2026-04-08 01:01:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:32.685726 | orchestrator | 2026-04-08 01:01:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 01:01:32.686770 | orchestrator | 2026-04-08 01:01:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:01:32.686911 | orchestrator | 2026-04-08 01:01:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:35.736413 | orchestrator | 2026-04-08 01:01:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:01:35.737918 | orchestrator | 2026-04-08 01:01:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:01:35.738203 | orchestrator | 2026-04-08 01:01:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:38.782618 | orchestrator | 2026-04-08 01:01:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:01:38.784188 | orchestrator | 2026-04-08 01:01:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:01:38.784258 | orchestrator | 2026-04-08 01:01:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:41.828465 | orchestrator | 2026-04-08 01:01:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:01:41.831045 | orchestrator | 2026-04-08 01:01:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:01:41.831101 | orchestrator | 2026-04-08 01:01:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:44.878571 | orchestrator | 2026-04-08 01:01:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:01:44.881097 | orchestrator | 2026-04-08 01:01:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:01:44.881148 | orchestrator | 2026-04-08 01:01:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:47.930921 | orchestrator | 2026-04-08 01:01:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:01:47.932691 | orchestrator | 2026-04-08 01:01:47 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:01:47.932743 | orchestrator | 2026-04-08 01:01:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:50.978782 | orchestrator | 2026-04-08 01:01:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:01:50.980268 | orchestrator | 2026-04-08 01:01:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:01:50.980683 | orchestrator | 2026-04-08 01:01:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:54.027911 | orchestrator | 2026-04-08 01:01:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:01:54.029431 | orchestrator | 2026-04-08 01:01:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:01:54.029494 | orchestrator | 2026-04-08 01:01:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:01:57.070643 | orchestrator | 2026-04-08 01:01:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:01:57.070873 | orchestrator | 2026-04-08 01:01:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:01:57.070892 | orchestrator | 2026-04-08 01:01:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:02:00.108073 | orchestrator | 2026-04-08 01:02:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:02:00.109920 | orchestrator | 2026-04-08 01:02:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:02:00.110081 | orchestrator | 2026-04-08 01:02:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:02:03.165300 | orchestrator | 2026-04-08 01:02:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:02:03.168073 | orchestrator | 2026-04-08 01:02:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
01:02:03.168180 | orchestrator | 2026-04-08 01:02:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:02:06.217966 | orchestrator | 2026-04-08 01:02:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:02:06.220358 | orchestrator | 2026-04-08 01:02:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:02:06.220496 | orchestrator | 2026-04-08 01:02:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:02:09.274602 | orchestrator | 2026-04-08 01:02:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:02:09.275670 | orchestrator | 2026-04-08 01:02:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:02:09.275861 | orchestrator | 2026-04-08 01:02:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:02:12.316120 | orchestrator | 2026-04-08 01:02:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:02:12.317291 | orchestrator | 2026-04-08 01:02:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:02:12.317369 | orchestrator | 2026-04-08 01:02:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:02:15.359273 | orchestrator | 2026-04-08 01:02:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:02:15.360396 | orchestrator | 2026-04-08 01:02:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:02:15.360440 | orchestrator | 2026-04-08 01:02:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:02:18.409092 | orchestrator | 2026-04-08 01:02:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:02:18.411494 | orchestrator | 2026-04-08 01:02:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:02:18.411570 | orchestrator | 2026-04-08 01:02:18 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 01:02:21.452777 | orchestrator | 2026-04-08 01:02:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:02:21.454174 | orchestrator | 2026-04-08 01:02:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:02:21.454222 | orchestrator | 2026-04-08 01:02:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:02:24.499784 | orchestrator | 2026-04-08 01:02:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:02:24.502137 | orchestrator | 2026-04-08 01:02:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:02:24.502239 | orchestrator | 2026-04-08 01:02:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:02:27.544194 | orchestrator | 2026-04-08 01:02:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:02:27.545739 | orchestrator | 2026-04-08 01:02:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:02:27.545821 | orchestrator | 2026-04-08 01:02:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:02:30.589077 | orchestrator | 2026-04-08 01:02:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:02:30.590952 | orchestrator | 2026-04-08 01:02:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:02:30.591044 | orchestrator | 2026-04-08 01:02:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:02:33.633211 | orchestrator | 2026-04-08 01:02:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:02:33.633372 | orchestrator | 2026-04-08 01:02:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:02:33.633389 | orchestrator | 2026-04-08 01:02:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:02:36.675719 | orchestrator | 2026-04-08 
01:02:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:02:36.675795 | orchestrator | 2026-04-08 01:02:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:02:36.675801 | orchestrator | 2026-04-08 01:02:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:02:39.720126 | orchestrator | 2026-04-08 01:02:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:02:39.722422 | orchestrator | 2026-04-08 01:02:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:02:39.722477 | orchestrator | 2026-04-08 01:02:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:02:42.772116 | orchestrator | 2026-04-08 01:02:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:02:42.773211 | orchestrator | 2026-04-08 01:02:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:02:42.773252 | orchestrator | 2026-04-08 01:02:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:02:45.823606 | orchestrator | 2026-04-08 01:02:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:02:45.826359 | orchestrator | 2026-04-08 01:02:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:02:45.826466 | orchestrator | 2026-04-08 01:02:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:02:48.868483 | orchestrator | 2026-04-08 01:02:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:02:48.869901 | orchestrator | 2026-04-08 01:02:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:02:48.869999 | orchestrator | 2026-04-08 01:02:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:02:51.919373 | orchestrator | 2026-04-08 01:02:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 01:02:51.922161 | orchestrator | 2026-04-08 01:02:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:02:51.922269 | orchestrator | 2026-04-08 01:02:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:02:54.963652 | orchestrator | 2026-04-08 01:02:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:02:54.965215 | orchestrator | 2026-04-08 01:02:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:02:54.965311 | orchestrator | 2026-04-08 01:02:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:02:58.008298 | orchestrator | 2026-04-08 01:02:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:02:58.010133 | orchestrator | 2026-04-08 01:02:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:02:58.010209 | orchestrator | 2026-04-08 01:02:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:03:01.045543 | orchestrator | 2026-04-08 01:03:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:01.046596 | orchestrator | 2026-04-08 01:03:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:03:01.047157 | orchestrator | 2026-04-08 01:03:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:03:04.087585 | orchestrator | 2026-04-08 01:03:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:04.090858 | orchestrator | 2026-04-08 01:03:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:03:04.090982 | orchestrator | 2026-04-08 01:03:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:03:07.137722 | orchestrator | 2026-04-08 01:03:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:07.139741 | orchestrator | 2026-04-08 01:03:07 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:03:07.140715 | orchestrator | 2026-04-08 01:03:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:03:10.178486 | orchestrator | 2026-04-08 01:03:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:10.179373 | orchestrator | 2026-04-08 01:03:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:03:10.179418 | orchestrator | 2026-04-08 01:03:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:03:13.217327 | orchestrator | 2026-04-08 01:03:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:13.219036 | orchestrator | 2026-04-08 01:03:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:03:13.219087 | orchestrator | 2026-04-08 01:03:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:03:16.258730 | orchestrator | 2026-04-08 01:03:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:16.260688 | orchestrator | 2026-04-08 01:03:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:03:16.260773 | orchestrator | 2026-04-08 01:03:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:03:19.308673 | orchestrator | 2026-04-08 01:03:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:19.310222 | orchestrator | 2026-04-08 01:03:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:03:19.310287 | orchestrator | 2026-04-08 01:03:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:03:22.353573 | orchestrator | 2026-04-08 01:03:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:22.355097 | orchestrator | 2026-04-08 01:03:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
01:03:22.355180 | orchestrator | 2026-04-08 01:03:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:03:25.401425 | orchestrator | 2026-04-08 01:03:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:25.403129 | orchestrator | 2026-04-08 01:03:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:03:25.403199 | orchestrator | 2026-04-08 01:03:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:03:28.446010 | orchestrator | 2026-04-08 01:03:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:28.447609 | orchestrator | 2026-04-08 01:03:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:03:28.447656 | orchestrator | 2026-04-08 01:03:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:03:31.485472 | orchestrator | 2026-04-08 01:03:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:31.487977 | orchestrator | 2026-04-08 01:03:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:03:31.488048 | orchestrator | 2026-04-08 01:03:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:03:34.531053 | orchestrator | 2026-04-08 01:03:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:34.533359 | orchestrator | 2026-04-08 01:03:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:03:34.533422 | orchestrator | 2026-04-08 01:03:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:03:37.565750 | orchestrator | 2026-04-08 01:03:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:37.568185 | orchestrator | 2026-04-08 01:03:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:03:37.568307 | orchestrator | 2026-04-08 01:03:37 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 01:03:40.613189 | orchestrator | 2026-04-08 01:03:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:40.615453 | orchestrator | 2026-04-08 01:03:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:03:40.615528 | orchestrator | 2026-04-08 01:03:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:03:43.662264 | orchestrator | 2026-04-08 01:03:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:43.663439 | orchestrator | 2026-04-08 01:03:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:03:43.663472 | orchestrator | 2026-04-08 01:03:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:03:46.711951 | orchestrator | 2026-04-08 01:03:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:46.713933 | orchestrator | 2026-04-08 01:03:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:03:46.713980 | orchestrator | 2026-04-08 01:03:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:03:49.749488 | orchestrator | 2026-04-08 01:03:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:49.750340 | orchestrator | 2026-04-08 01:03:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:03:49.750386 | orchestrator | 2026-04-08 01:03:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:03:52.797080 | orchestrator | 2026-04-08 01:03:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:52.798048 | orchestrator | 2026-04-08 01:03:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:03:52.798082 | orchestrator | 2026-04-08 01:03:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:03:55.844279 | orchestrator | 2026-04-08 
01:03:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:55.846234 | orchestrator | 2026-04-08 01:03:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:03:55.846278 | orchestrator | 2026-04-08 01:03:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:03:58.895512 | orchestrator | 2026-04-08 01:03:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:03:58.897817 | orchestrator | 2026-04-08 01:03:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:03:58.898162 | orchestrator | 2026-04-08 01:03:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:04:01.939269 | orchestrator | 2026-04-08 01:04:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:04:01.941089 | orchestrator | 2026-04-08 01:04:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:04:01.941186 | orchestrator | 2026-04-08 01:04:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:04:04.992020 | orchestrator | 2026-04-08 01:04:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:04:04.993383 | orchestrator | 2026-04-08 01:04:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:04:04.993468 | orchestrator | 2026-04-08 01:04:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:04:08.038287 | orchestrator | 2026-04-08 01:04:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:04:08.039995 | orchestrator | 2026-04-08 01:04:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:04:08.040064 | orchestrator | 2026-04-08 01:04:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:04:11.087495 | orchestrator | 2026-04-08 01:04:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 01:04:11.088165 | orchestrator | 2026-04-08 01:04:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 01:04:11.088334 | orchestrator | 2026-04-08 01:04:11 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:04:14.136056 | orchestrator | 2026-04-08 01:04:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 01:04:14.137695 | orchestrator | 2026-04-08 01:04:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 01:04:14.137767 | orchestrator | 2026-04-08 01:04:14 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:09:25.102979 | orchestrator | 2026-04-08 01:09:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 01:09:25.105452 | orchestrator | 2026-04-08 01:09:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 01:09:25.105494 | orchestrator | 2026-04-08 01:09:25 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:09:28.152232 | orchestrator | 2026-04-08 01:09:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state
STARTED 2026-04-08 01:09:28.153252 | orchestrator | 2026-04-08 01:09:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:09:28.153307 | orchestrator | 2026-04-08 01:09:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:09:31.204635 | orchestrator | 2026-04-08 01:09:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:09:31.207390 | orchestrator | 2026-04-08 01:09:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:09:31.207473 | orchestrator | 2026-04-08 01:09:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:09:34.254896 | orchestrator | 2026-04-08 01:09:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:09:34.256694 | orchestrator | 2026-04-08 01:09:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:09:34.256897 | orchestrator | 2026-04-08 01:09:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:09:37.305133 | orchestrator | 2026-04-08 01:09:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:09:37.306284 | orchestrator | 2026-04-08 01:09:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:09:37.306362 | orchestrator | 2026-04-08 01:09:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:09:40.355302 | orchestrator | 2026-04-08 01:09:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:09:40.358184 | orchestrator | 2026-04-08 01:09:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:09:40.358353 | orchestrator | 2026-04-08 01:09:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:09:43.405436 | orchestrator | 2026-04-08 01:09:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:09:43.407978 | orchestrator | 2026-04-08 01:09:43 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:09:43.408290 | orchestrator | 2026-04-08 01:09:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:09:46.459609 | orchestrator | 2026-04-08 01:09:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:09:46.461528 | orchestrator | 2026-04-08 01:09:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:09:46.461620 | orchestrator | 2026-04-08 01:09:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:09:49.512620 | orchestrator | 2026-04-08 01:09:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:09:49.512762 | orchestrator | 2026-04-08 01:09:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:09:49.512775 | orchestrator | 2026-04-08 01:09:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:09:52.557923 | orchestrator | 2026-04-08 01:09:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:09:52.559061 | orchestrator | 2026-04-08 01:09:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:09:52.559143 | orchestrator | 2026-04-08 01:09:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:09:55.606457 | orchestrator | 2026-04-08 01:09:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:09:55.608009 | orchestrator | 2026-04-08 01:09:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:09:55.608055 | orchestrator | 2026-04-08 01:09:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:09:58.653480 | orchestrator | 2026-04-08 01:09:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:09:58.655909 | orchestrator | 2026-04-08 01:09:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
01:09:58.655995 | orchestrator | 2026-04-08 01:09:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:01.707353 | orchestrator | 2026-04-08 01:10:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:10:01.708728 | orchestrator | 2026-04-08 01:10:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:01.708864 | orchestrator | 2026-04-08 01:10:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:04.756060 | orchestrator | 2026-04-08 01:10:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:10:04.757239 | orchestrator | 2026-04-08 01:10:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:04.757306 | orchestrator | 2026-04-08 01:10:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:07.802343 | orchestrator | 2026-04-08 01:10:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:10:07.805620 | orchestrator | 2026-04-08 01:10:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:07.805702 | orchestrator | 2026-04-08 01:10:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:10.846294 | orchestrator | 2026-04-08 01:10:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:10:10.848480 | orchestrator | 2026-04-08 01:10:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:10.848603 | orchestrator | 2026-04-08 01:10:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:13.897824 | orchestrator | 2026-04-08 01:10:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:10:13.899660 | orchestrator | 2026-04-08 01:10:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:13.899710 | orchestrator | 2026-04-08 01:10:13 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 01:10:16.944197 | orchestrator | 2026-04-08 01:10:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:10:16.947642 | orchestrator | 2026-04-08 01:10:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:16.947724 | orchestrator | 2026-04-08 01:10:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:19.997492 | orchestrator | 2026-04-08 01:10:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:10:20.005837 | orchestrator | 2026-04-08 01:10:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:20.005887 | orchestrator | 2026-04-08 01:10:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:23.056597 | orchestrator | 2026-04-08 01:10:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:10:23.058605 | orchestrator | 2026-04-08 01:10:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:23.058658 | orchestrator | 2026-04-08 01:10:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:26.106493 | orchestrator | 2026-04-08 01:10:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:10:26.110145 | orchestrator | 2026-04-08 01:10:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:26.110198 | orchestrator | 2026-04-08 01:10:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:29.158965 | orchestrator | 2026-04-08 01:10:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:10:29.160712 | orchestrator | 2026-04-08 01:10:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:29.160836 | orchestrator | 2026-04-08 01:10:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:32.211621 | orchestrator | 2026-04-08 
01:10:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:10:32.213473 | orchestrator | 2026-04-08 01:10:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:32.213530 | orchestrator | 2026-04-08 01:10:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:35.266049 | orchestrator | 2026-04-08 01:10:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:10:35.268110 | orchestrator | 2026-04-08 01:10:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:35.268229 | orchestrator | 2026-04-08 01:10:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:38.316408 | orchestrator | 2026-04-08 01:10:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:10:38.318831 | orchestrator | 2026-04-08 01:10:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:38.318907 | orchestrator | 2026-04-08 01:10:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:41.361675 | orchestrator | 2026-04-08 01:10:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:10:41.362867 | orchestrator | 2026-04-08 01:10:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:41.362920 | orchestrator | 2026-04-08 01:10:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:44.414740 | orchestrator | 2026-04-08 01:10:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:10:44.416588 | orchestrator | 2026-04-08 01:10:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:44.416639 | orchestrator | 2026-04-08 01:10:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:47.465162 | orchestrator | 2026-04-08 01:10:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 01:10:47.467251 | orchestrator | 2026-04-08 01:10:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:47.467331 | orchestrator | 2026-04-08 01:10:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:50.512329 | orchestrator | 2026-04-08 01:10:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:10:50.514322 | orchestrator | 2026-04-08 01:10:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:50.514585 | orchestrator | 2026-04-08 01:10:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:53.562499 | orchestrator | 2026-04-08 01:10:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:10:53.565126 | orchestrator | 2026-04-08 01:10:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:53.565201 | orchestrator | 2026-04-08 01:10:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:56.611177 | orchestrator | 2026-04-08 01:10:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:10:56.612780 | orchestrator | 2026-04-08 01:10:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:56.612897 | orchestrator | 2026-04-08 01:10:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:10:59.656736 | orchestrator | 2026-04-08 01:10:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:10:59.657745 | orchestrator | 2026-04-08 01:10:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:10:59.657799 | orchestrator | 2026-04-08 01:10:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:11:02.709446 | orchestrator | 2026-04-08 01:11:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:11:02.711395 | orchestrator | 2026-04-08 01:11:02 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:11:02.711441 | orchestrator | 2026-04-08 01:11:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:11:05.755471 | orchestrator | 2026-04-08 01:11:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:11:05.757155 | orchestrator | 2026-04-08 01:11:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:11:05.757204 | orchestrator | 2026-04-08 01:11:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:11:08.796615 | orchestrator | 2026-04-08 01:11:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:11:08.797952 | orchestrator | 2026-04-08 01:11:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:11:08.797996 | orchestrator | 2026-04-08 01:11:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:11:11.842069 | orchestrator | 2026-04-08 01:11:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:11:11.843882 | orchestrator | 2026-04-08 01:11:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:11:11.844086 | orchestrator | 2026-04-08 01:11:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:11:14.886829 | orchestrator | 2026-04-08 01:11:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:11:14.888968 | orchestrator | 2026-04-08 01:11:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:11:14.889636 | orchestrator | 2026-04-08 01:11:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:11:17.943355 | orchestrator | 2026-04-08 01:11:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:11:17.944790 | orchestrator | 2026-04-08 01:11:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
01:11:17.944868 | orchestrator | 2026-04-08 01:11:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:11:20.995290 | orchestrator | 2026-04-08 01:11:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:11:20.999916 | orchestrator | 2026-04-08 01:11:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:11:21.000050 | orchestrator | 2026-04-08 01:11:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:11:24.049070 | orchestrator | 2026-04-08 01:11:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:11:24.050286 | orchestrator | 2026-04-08 01:11:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:11:24.050344 | orchestrator | 2026-04-08 01:11:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:11:27.107117 | orchestrator | 2026-04-08 01:11:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:11:27.110001 | orchestrator | 2026-04-08 01:11:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:11:27.110150 | orchestrator | 2026-04-08 01:11:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:11:30.158594 | orchestrator | 2026-04-08 01:11:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:11:30.160522 | orchestrator | 2026-04-08 01:11:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:11:30.160678 | orchestrator | 2026-04-08 01:11:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:11:33.213011 | orchestrator | 2026-04-08 01:11:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:11:33.215396 | orchestrator | 2026-04-08 01:11:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:11:33.215701 | orchestrator | 2026-04-08 01:11:33 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 01:11:36.258074 | orchestrator | 2026-04-08 01:11:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:11:36.260499 | orchestrator | 2026-04-08 01:11:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:11:36.260546 | orchestrator | 2026-04-08 01:11:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:11:39.309603 | orchestrator | 2026-04-08 01:11:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:11:39.312552 | orchestrator | 2026-04-08 01:11:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:11:39.312642 | orchestrator | 2026-04-08 01:11:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:11:42.348372 | orchestrator | 2026-04-08 01:11:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:11:42.350976 | orchestrator | 2026-04-08 01:11:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:11:42.351029 | orchestrator | 2026-04-08 01:11:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:11:45.396774 | orchestrator | 2026-04-08 01:11:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:11:45.398561 | orchestrator | 2026-04-08 01:11:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:11:45.398683 | orchestrator | 2026-04-08 01:11:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:11:48.448344 | orchestrator | 2026-04-08 01:11:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:11:48.450201 | orchestrator | 2026-04-08 01:11:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:11:48.450316 | orchestrator | 2026-04-08 01:11:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:11:51.500952 | orchestrator | 2026-04-08 
01:11:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:11:51.502320 | orchestrator | 2026-04-08 01:11:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:11:51.502388 | orchestrator | 2026-04-08 01:11:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:11:54.551063 | orchestrator | 2026-04-08 01:11:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:11:54.552349 | orchestrator | 2026-04-08 01:11:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:11:54.552408 | orchestrator | 2026-04-08 01:11:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:11:57.596422 | orchestrator | 2026-04-08 01:11:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:11:57.598276 | orchestrator | 2026-04-08 01:11:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:11:57.598342 | orchestrator | 2026-04-08 01:11:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:12:00.646088 | orchestrator | 2026-04-08 01:12:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:12:00.647514 | orchestrator | 2026-04-08 01:12:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:12:00.647607 | orchestrator | 2026-04-08 01:12:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:12:03.700420 | orchestrator | 2026-04-08 01:12:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:12:03.703049 | orchestrator | 2026-04-08 01:12:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:12:03.703209 | orchestrator | 2026-04-08 01:12:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:12:06.752564 | orchestrator | 2026-04-08 01:12:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 01:12:06.754053 | orchestrator | 2026-04-08 01:12:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:12:06.754094 | orchestrator | 2026-04-08 01:12:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:12:09.801482 | orchestrator | 2026-04-08 01:12:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:12:09.803512 | orchestrator | 2026-04-08 01:12:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:12:09.803572 | orchestrator | 2026-04-08 01:12:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:12:12.848687 | orchestrator | 2026-04-08 01:12:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:12:12.849874 | orchestrator | 2026-04-08 01:12:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:12:12.850216 | orchestrator | 2026-04-08 01:12:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:12:15.898844 | orchestrator | 2026-04-08 01:12:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:12:15.900356 | orchestrator | 2026-04-08 01:12:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:12:15.900451 | orchestrator | 2026-04-08 01:12:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:12:18.948942 | orchestrator | 2026-04-08 01:12:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:12:18.950205 | orchestrator | 2026-04-08 01:12:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:12:18.950291 | orchestrator | 2026-04-08 01:12:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:12:21.997882 | orchestrator | 2026-04-08 01:12:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:12:21.999535 | orchestrator | 2026-04-08 01:12:21 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:12:21.999612 | orchestrator | 2026-04-08 01:12:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:12:25.050383 | orchestrator | 2026-04-08 01:12:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:12:25.052795 | orchestrator | 2026-04-08 01:12:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:12:25.052868 | orchestrator | 2026-04-08 01:12:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:12:28.101556 | orchestrator | 2026-04-08 01:12:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:12:28.103889 | orchestrator | 2026-04-08 01:12:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:12:28.103955 | orchestrator | 2026-04-08 01:12:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:12:31.145052 | orchestrator | 2026-04-08 01:12:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:12:31.147248 | orchestrator | 2026-04-08 01:12:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:12:31.147297 | orchestrator | 2026-04-08 01:12:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:12:34.193379 | orchestrator | 2026-04-08 01:12:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:12:34.194852 | orchestrator | 2026-04-08 01:12:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:12:34.194960 | orchestrator | 2026-04-08 01:12:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:12:37.240104 | orchestrator | 2026-04-08 01:12:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:12:37.241439 | orchestrator | 2026-04-08 01:12:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
01:12:37.241552 | orchestrator | 2026-04-08 01:12:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:12:40.289486 | orchestrator | 2026-04-08 01:12:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:12:40.290977 | orchestrator | 2026-04-08 01:12:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:12:40.291087 | orchestrator | 2026-04-08 01:12:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:12:43.334074 | orchestrator | 2026-04-08 01:12:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:12:43.337313 | orchestrator | 2026-04-08 01:12:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:12:43.337368 | orchestrator | 2026-04-08 01:12:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:12:46.389995 | orchestrator | 2026-04-08 01:12:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:12:46.392453 | orchestrator | 2026-04-08 01:12:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:12:46.392545 | orchestrator | 2026-04-08 01:12:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:12:49.444017 | orchestrator | 2026-04-08 01:12:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:12:49.445595 | orchestrator | 2026-04-08 01:12:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:12:49.445652 | orchestrator | 2026-04-08 01:12:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:12:52.496096 | orchestrator | 2026-04-08 01:12:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:12:52.498485 | orchestrator | 2026-04-08 01:12:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:12:52.498865 | orchestrator | 2026-04-08 01:12:52 | INFO  | Wait 1 second(s) 
until the next check
2026-04-08 01:12:55.550807 | orchestrator | 2026-04-08 01:12:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 01:12:55.552970 | orchestrator | 2026-04-08 01:12:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 01:12:55.553020 | orchestrator | 2026-04-08 01:12:55 | INFO  | Wait 1 second(s) until the next check
[... identical poll cycle repeated every ~3 s from 01:12:58 through 01:14:17; both tasks remain in state STARTED ...]
2026-04-08 01:14:20.925417 | orchestrator | 2026-04-08 01:14:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 01:16:21.036743 | orchestrator | 2026-04-08 01:16:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 01:16:21.036824 | orchestrator | 2026-04-08 01:16:21 | INFO  | Wait 1 second(s) until the next check
[... identical poll cycle repeated every ~3 s from 01:16:24 through 01:20:06; both tasks remain in state STARTED ...]
2026-04-08 01:20:09.491275 | orchestrator | 2026-04-08 01:20:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 01:20:09.492273 | orchestrator | 2026-04-08 01:20:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 01:20:09.492311 | orchestrator | 2026-04-08 01:20:09 | INFO  | Wait 1 second(s)
until the next check 2026-04-08 01:20:12.537094 | orchestrator | 2026-04-08 01:20:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:20:12.538538 | orchestrator | 2026-04-08 01:20:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:20:12.538856 | orchestrator | 2026-04-08 01:20:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:20:15.585255 | orchestrator | 2026-04-08 01:20:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:20:15.586776 | orchestrator | 2026-04-08 01:20:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:20:15.587069 | orchestrator | 2026-04-08 01:20:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:20:18.630166 | orchestrator | 2026-04-08 01:20:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:20:18.631178 | orchestrator | 2026-04-08 01:20:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:20:18.631582 | orchestrator | 2026-04-08 01:20:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:20:21.677719 | orchestrator | 2026-04-08 01:20:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:20:21.680219 | orchestrator | 2026-04-08 01:20:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:20:21.680320 | orchestrator | 2026-04-08 01:20:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:20:24.727798 | orchestrator | 2026-04-08 01:20:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:20:24.730290 | orchestrator | 2026-04-08 01:20:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:20:24.730349 | orchestrator | 2026-04-08 01:20:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:20:27.772888 | orchestrator | 2026-04-08 
01:20:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:20:27.774249 | orchestrator | 2026-04-08 01:20:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:20:27.774308 | orchestrator | 2026-04-08 01:20:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:20:30.816555 | orchestrator | 2026-04-08 01:20:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:20:30.818695 | orchestrator | 2026-04-08 01:20:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:20:30.818769 | orchestrator | 2026-04-08 01:20:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:20:33.865242 | orchestrator | 2026-04-08 01:20:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:20:33.869109 | orchestrator | 2026-04-08 01:20:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:20:33.869185 | orchestrator | 2026-04-08 01:20:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:20:36.914375 | orchestrator | 2026-04-08 01:20:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:20:36.914581 | orchestrator | 2026-04-08 01:20:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:20:36.914598 | orchestrator | 2026-04-08 01:20:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:20:39.956166 | orchestrator | 2026-04-08 01:20:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:20:39.958641 | orchestrator | 2026-04-08 01:20:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:20:39.958726 | orchestrator | 2026-04-08 01:20:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:20:43.000615 | orchestrator | 2026-04-08 01:20:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 01:20:43.003450 | orchestrator | 2026-04-08 01:20:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:20:43.003574 | orchestrator | 2026-04-08 01:20:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:20:46.049172 | orchestrator | 2026-04-08 01:20:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:20:46.051038 | orchestrator | 2026-04-08 01:20:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:20:46.051132 | orchestrator | 2026-04-08 01:20:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:20:49.100953 | orchestrator | 2026-04-08 01:20:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:20:49.104116 | orchestrator | 2026-04-08 01:20:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:20:49.104202 | orchestrator | 2026-04-08 01:20:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:20:52.148997 | orchestrator | 2026-04-08 01:20:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:20:52.149877 | orchestrator | 2026-04-08 01:20:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:20:52.149915 | orchestrator | 2026-04-08 01:20:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:20:55.193472 | orchestrator | 2026-04-08 01:20:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:20:55.195305 | orchestrator | 2026-04-08 01:20:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:20:55.195449 | orchestrator | 2026-04-08 01:20:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:20:58.243772 | orchestrator | 2026-04-08 01:20:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:20:58.245172 | orchestrator | 2026-04-08 01:20:58 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:20:58.245206 | orchestrator | 2026-04-08 01:20:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:21:01.291332 | orchestrator | 2026-04-08 01:21:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:01.292913 | orchestrator | 2026-04-08 01:21:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:21:01.292968 | orchestrator | 2026-04-08 01:21:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:21:04.341041 | orchestrator | 2026-04-08 01:21:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:04.342957 | orchestrator | 2026-04-08 01:21:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:21:04.343055 | orchestrator | 2026-04-08 01:21:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:21:07.385030 | orchestrator | 2026-04-08 01:21:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:07.387647 | orchestrator | 2026-04-08 01:21:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:21:07.387790 | orchestrator | 2026-04-08 01:21:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:21:10.432771 | orchestrator | 2026-04-08 01:21:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:10.433768 | orchestrator | 2026-04-08 01:21:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:21:10.433837 | orchestrator | 2026-04-08 01:21:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:21:13.479358 | orchestrator | 2026-04-08 01:21:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:13.481821 | orchestrator | 2026-04-08 01:21:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
01:21:13.481900 | orchestrator | 2026-04-08 01:21:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:21:16.523392 | orchestrator | 2026-04-08 01:21:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:16.525661 | orchestrator | 2026-04-08 01:21:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:21:16.525724 | orchestrator | 2026-04-08 01:21:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:21:19.573002 | orchestrator | 2026-04-08 01:21:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:19.575456 | orchestrator | 2026-04-08 01:21:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:21:19.575512 | orchestrator | 2026-04-08 01:21:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:21:22.627055 | orchestrator | 2026-04-08 01:21:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:22.630559 | orchestrator | 2026-04-08 01:21:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:21:22.630614 | orchestrator | 2026-04-08 01:21:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:21:25.679246 | orchestrator | 2026-04-08 01:21:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:25.681643 | orchestrator | 2026-04-08 01:21:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:21:25.681749 | orchestrator | 2026-04-08 01:21:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:21:28.736485 | orchestrator | 2026-04-08 01:21:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:28.737879 | orchestrator | 2026-04-08 01:21:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:21:28.737973 | orchestrator | 2026-04-08 01:21:28 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 01:21:31.788082 | orchestrator | 2026-04-08 01:21:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:31.790962 | orchestrator | 2026-04-08 01:21:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:21:31.791023 | orchestrator | 2026-04-08 01:21:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:21:34.839661 | orchestrator | 2026-04-08 01:21:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:34.841397 | orchestrator | 2026-04-08 01:21:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:21:34.841474 | orchestrator | 2026-04-08 01:21:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:21:37.884468 | orchestrator | 2026-04-08 01:21:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:37.886284 | orchestrator | 2026-04-08 01:21:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:21:37.886336 | orchestrator | 2026-04-08 01:21:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:21:40.927505 | orchestrator | 2026-04-08 01:21:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:40.928350 | orchestrator | 2026-04-08 01:21:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:21:40.928393 | orchestrator | 2026-04-08 01:21:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:21:43.978883 | orchestrator | 2026-04-08 01:21:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:43.980901 | orchestrator | 2026-04-08 01:21:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:21:43.980964 | orchestrator | 2026-04-08 01:21:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:21:47.029171 | orchestrator | 2026-04-08 
01:21:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:47.030351 | orchestrator | 2026-04-08 01:21:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:21:47.030395 | orchestrator | 2026-04-08 01:21:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:21:50.080280 | orchestrator | 2026-04-08 01:21:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:50.081422 | orchestrator | 2026-04-08 01:21:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:21:50.083303 | orchestrator | 2026-04-08 01:21:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:21:53.132528 | orchestrator | 2026-04-08 01:21:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:53.135132 | orchestrator | 2026-04-08 01:21:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:21:53.135199 | orchestrator | 2026-04-08 01:21:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:21:56.178338 | orchestrator | 2026-04-08 01:21:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:56.180312 | orchestrator | 2026-04-08 01:21:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:21:56.180456 | orchestrator | 2026-04-08 01:21:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:21:59.224798 | orchestrator | 2026-04-08 01:21:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:21:59.226181 | orchestrator | 2026-04-08 01:21:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:21:59.226288 | orchestrator | 2026-04-08 01:21:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:22:02.274772 | orchestrator | 2026-04-08 01:22:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 01:22:02.275945 | orchestrator | 2026-04-08 01:22:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:22:02.276073 | orchestrator | 2026-04-08 01:22:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:22:05.321119 | orchestrator | 2026-04-08 01:22:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:22:05.322700 | orchestrator | 2026-04-08 01:22:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:22:05.322803 | orchestrator | 2026-04-08 01:22:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:22:08.368661 | orchestrator | 2026-04-08 01:22:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:22:08.370484 | orchestrator | 2026-04-08 01:22:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:22:08.370581 | orchestrator | 2026-04-08 01:22:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:22:11.427074 | orchestrator | 2026-04-08 01:22:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:22:11.428460 | orchestrator | 2026-04-08 01:22:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:22:11.428690 | orchestrator | 2026-04-08 01:22:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:22:14.472638 | orchestrator | 2026-04-08 01:22:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:22:14.474203 | orchestrator | 2026-04-08 01:22:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:22:14.474253 | orchestrator | 2026-04-08 01:22:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:22:17.518395 | orchestrator | 2026-04-08 01:22:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:22:17.519377 | orchestrator | 2026-04-08 01:22:17 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:22:17.519422 | orchestrator | 2026-04-08 01:22:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:22:20.574260 | orchestrator | 2026-04-08 01:22:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:22:20.576075 | orchestrator | 2026-04-08 01:22:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:22:20.576119 | orchestrator | 2026-04-08 01:22:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:22:23.628105 | orchestrator | 2026-04-08 01:22:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:22:23.629800 | orchestrator | 2026-04-08 01:22:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:22:23.629862 | orchestrator | 2026-04-08 01:22:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:22:26.678811 | orchestrator | 2026-04-08 01:22:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:22:26.682695 | orchestrator | 2026-04-08 01:22:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:22:26.682739 | orchestrator | 2026-04-08 01:22:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:22:29.730337 | orchestrator | 2026-04-08 01:22:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:22:29.731737 | orchestrator | 2026-04-08 01:22:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:22:29.731783 | orchestrator | 2026-04-08 01:22:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:22:32.788630 | orchestrator | 2026-04-08 01:22:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:22:32.790890 | orchestrator | 2026-04-08 01:22:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
01:22:32.790960 | orchestrator | 2026-04-08 01:22:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:22:35.836919 | orchestrator | 2026-04-08 01:22:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:22:35.838407 | orchestrator | 2026-04-08 01:22:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:22:35.838507 | orchestrator | 2026-04-08 01:22:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:22:38.888528 | orchestrator | 2026-04-08 01:22:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:22:38.890566 | orchestrator | 2026-04-08 01:22:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:22:38.890693 | orchestrator | 2026-04-08 01:22:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:22:41.943137 | orchestrator | 2026-04-08 01:22:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:22:41.945748 | orchestrator | 2026-04-08 01:22:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:22:41.945815 | orchestrator | 2026-04-08 01:22:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:22:44.991224 | orchestrator | 2026-04-08 01:22:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:22:44.993005 | orchestrator | 2026-04-08 01:22:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:22:44.993074 | orchestrator | 2026-04-08 01:22:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:22:48.042115 | orchestrator | 2026-04-08 01:22:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:22:48.043953 | orchestrator | 2026-04-08 01:22:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:22:48.044029 | orchestrator | 2026-04-08 01:22:48 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 01:22:51.087369 | orchestrator | 2026-04-08 01:22:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:22:51.088810 | orchestrator | 2026-04-08 01:22:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:22:51.088849 | orchestrator | 2026-04-08 01:22:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:22:54.141466 | orchestrator | 2026-04-08 01:22:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:22:54.143356 | orchestrator | 2026-04-08 01:22:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:22:54.143450 | orchestrator | 2026-04-08 01:22:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:22:57.188504 | orchestrator | 2026-04-08 01:22:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:22:57.188617 | orchestrator | 2026-04-08 01:22:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:22:57.188684 | orchestrator | 2026-04-08 01:22:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:23:00.229047 | orchestrator | 2026-04-08 01:23:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:23:00.230237 | orchestrator | 2026-04-08 01:23:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:23:00.230297 | orchestrator | 2026-04-08 01:23:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:23:03.281118 | orchestrator | 2026-04-08 01:23:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:23:03.282269 | orchestrator | 2026-04-08 01:23:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:23:03.282346 | orchestrator | 2026-04-08 01:23:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:23:06.328418 | orchestrator | 2026-04-08 
01:23:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:23:06.330545 | orchestrator | 2026-04-08 01:23:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:23:06.330681 | orchestrator | 2026-04-08 01:23:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:23:09.380133 | orchestrator | 2026-04-08 01:23:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:23:09.381467 | orchestrator | 2026-04-08 01:23:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:23:09.381574 | orchestrator | 2026-04-08 01:23:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:23:12.432587 | orchestrator | 2026-04-08 01:23:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:23:12.433989 | orchestrator | 2026-04-08 01:23:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:23:12.434110 | orchestrator | 2026-04-08 01:23:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:23:15.479400 | orchestrator | 2026-04-08 01:23:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:23:15.480545 | orchestrator | 2026-04-08 01:23:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:23:15.480586 | orchestrator | 2026-04-08 01:23:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:23:18.525462 | orchestrator | 2026-04-08 01:23:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:23:18.527934 | orchestrator | 2026-04-08 01:23:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:23:18.528044 | orchestrator | 2026-04-08 01:23:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:23:21.568823 | orchestrator | 2026-04-08 01:23:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 01:23:21.569817 | orchestrator | 2026-04-08 01:23:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:23:21.569896 | orchestrator | 2026-04-08 01:23:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:23:24.623110 | orchestrator | 2026-04-08 01:23:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:23:24.625216 | orchestrator | 2026-04-08 01:23:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:23:24.625268 | orchestrator | 2026-04-08 01:23:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:23:27.674238 | orchestrator | 2026-04-08 01:23:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:23:27.676485 | orchestrator | 2026-04-08 01:23:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:23:27.676680 | orchestrator | 2026-04-08 01:23:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:23:30.721113 | orchestrator | 2026-04-08 01:23:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:23:30.723186 | orchestrator | 2026-04-08 01:23:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:23:30.723325 | orchestrator | 2026-04-08 01:23:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:23:33.766845 | orchestrator | 2026-04-08 01:23:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:23:33.768921 | orchestrator | 2026-04-08 01:23:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:23:33.768991 | orchestrator | 2026-04-08 01:23:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:23:36.821876 | orchestrator | 2026-04-08 01:23:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:23:36.822922 | orchestrator | 2026-04-08 01:23:36 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:23:36.823074 | orchestrator | 2026-04-08 01:23:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:23:39.868290 | orchestrator | 2026-04-08 01:23:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:23:39.870674 | orchestrator | 2026-04-08 01:23:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:23:39.870747 | orchestrator | 2026-04-08 01:23:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:23:42.910065 | orchestrator | 2026-04-08 01:23:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:23:42.910823 | orchestrator | 2026-04-08 01:23:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:23:42.910881 | orchestrator | 2026-04-08 01:23:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:23:45.959042 | orchestrator | 2026-04-08 01:23:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:23:45.961570 | orchestrator | 2026-04-08 01:23:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:23:45.961726 | orchestrator | 2026-04-08 01:23:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:23:49.007902 | orchestrator | 2026-04-08 01:23:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:23:49.009270 | orchestrator | 2026-04-08 01:23:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:23:49.009355 | orchestrator | 2026-04-08 01:23:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:23:52.051726 | orchestrator | 2026-04-08 01:23:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:23:52.053427 | orchestrator | 2026-04-08 01:23:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
01:23:52.053480 | orchestrator | 2026-04-08 01:23:52 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:23:55.105032 | orchestrator | 2026-04-08 01:23:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 01:23:55.107221 | orchestrator | 2026-04-08 01:23:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 01:23:55.107282 | orchestrator | 2026-04-08 01:23:55 | INFO  | Wait 1 second(s) until the next check
[identical status checks for tasks e8d40863-7499-4ee6-9471-9efa1265639c and 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 repeated every ~3 seconds from 01:23:58 through 01:29:21; both tasks remained in state STARTED throughout]
2026-04-08 01:29:24.415797 | orchestrator | 2026-04-08 01:29:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 01:29:24.417007 | orchestrator | 2026-04-08 01:29:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 01:29:24.417070 | orchestrator | 2026-04-08 01:29:24 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 01:29:27.457007 | orchestrator | 2026-04-08 01:29:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:29:27.458563 | orchestrator | 2026-04-08 01:29:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:29:27.458822 | orchestrator | 2026-04-08 01:29:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:29:30.508355 | orchestrator | 2026-04-08 01:29:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:29:30.509015 | orchestrator | 2026-04-08 01:29:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:29:30.509072 | orchestrator | 2026-04-08 01:29:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:29:33.559742 | orchestrator | 2026-04-08 01:29:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:29:33.562693 | orchestrator | 2026-04-08 01:29:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:29:33.562737 | orchestrator | 2026-04-08 01:29:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:29:36.608931 | orchestrator | 2026-04-08 01:29:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:29:36.610736 | orchestrator | 2026-04-08 01:29:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:29:36.610802 | orchestrator | 2026-04-08 01:29:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:29:39.656901 | orchestrator | 2026-04-08 01:29:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:29:39.659354 | orchestrator | 2026-04-08 01:29:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:29:39.659412 | orchestrator | 2026-04-08 01:29:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:29:42.706271 | orchestrator | 2026-04-08 
01:29:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:29:42.708243 | orchestrator | 2026-04-08 01:29:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:29:42.708309 | orchestrator | 2026-04-08 01:29:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:29:45.760623 | orchestrator | 2026-04-08 01:29:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:29:45.761671 | orchestrator | 2026-04-08 01:29:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:29:45.761768 | orchestrator | 2026-04-08 01:29:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:29:48.807413 | orchestrator | 2026-04-08 01:29:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:29:48.809851 | orchestrator | 2026-04-08 01:29:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:29:48.809933 | orchestrator | 2026-04-08 01:29:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:29:51.857632 | orchestrator | 2026-04-08 01:29:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:29:51.860458 | orchestrator | 2026-04-08 01:29:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:29:51.860573 | orchestrator | 2026-04-08 01:29:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:29:54.908495 | orchestrator | 2026-04-08 01:29:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:29:54.910506 | orchestrator | 2026-04-08 01:29:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:29:54.910553 | orchestrator | 2026-04-08 01:29:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:29:57.955221 | orchestrator | 2026-04-08 01:29:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 01:29:57.956516 | orchestrator | 2026-04-08 01:29:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:29:57.956557 | orchestrator | 2026-04-08 01:29:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:30:01.007327 | orchestrator | 2026-04-08 01:30:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:01.008954 | orchestrator | 2026-04-08 01:30:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:30:01.009091 | orchestrator | 2026-04-08 01:30:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:30:04.055326 | orchestrator | 2026-04-08 01:30:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:04.056449 | orchestrator | 2026-04-08 01:30:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:30:04.056510 | orchestrator | 2026-04-08 01:30:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:30:07.105583 | orchestrator | 2026-04-08 01:30:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:07.107812 | orchestrator | 2026-04-08 01:30:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:30:07.107908 | orchestrator | 2026-04-08 01:30:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:30:10.159313 | orchestrator | 2026-04-08 01:30:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:10.160697 | orchestrator | 2026-04-08 01:30:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:30:10.160746 | orchestrator | 2026-04-08 01:30:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:30:13.217543 | orchestrator | 2026-04-08 01:30:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:13.219983 | orchestrator | 2026-04-08 01:30:13 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:30:13.220044 | orchestrator | 2026-04-08 01:30:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:30:16.276731 | orchestrator | 2026-04-08 01:30:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:16.278177 | orchestrator | 2026-04-08 01:30:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:30:16.278401 | orchestrator | 2026-04-08 01:30:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:30:19.331559 | orchestrator | 2026-04-08 01:30:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:19.333453 | orchestrator | 2026-04-08 01:30:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:30:19.333556 | orchestrator | 2026-04-08 01:30:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:30:22.387843 | orchestrator | 2026-04-08 01:30:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:22.389536 | orchestrator | 2026-04-08 01:30:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:30:22.389673 | orchestrator | 2026-04-08 01:30:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:30:25.440158 | orchestrator | 2026-04-08 01:30:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:25.443084 | orchestrator | 2026-04-08 01:30:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:30:25.443269 | orchestrator | 2026-04-08 01:30:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:30:28.487116 | orchestrator | 2026-04-08 01:30:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:28.489015 | orchestrator | 2026-04-08 01:30:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
01:30:28.489060 | orchestrator | 2026-04-08 01:30:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:30:31.533417 | orchestrator | 2026-04-08 01:30:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:31.534421 | orchestrator | 2026-04-08 01:30:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:30:31.534452 | orchestrator | 2026-04-08 01:30:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:30:34.583761 | orchestrator | 2026-04-08 01:30:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:34.587645 | orchestrator | 2026-04-08 01:30:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:30:34.587961 | orchestrator | 2026-04-08 01:30:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:30:37.634716 | orchestrator | 2026-04-08 01:30:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:37.636215 | orchestrator | 2026-04-08 01:30:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:30:37.636301 | orchestrator | 2026-04-08 01:30:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:30:40.680920 | orchestrator | 2026-04-08 01:30:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:40.682528 | orchestrator | 2026-04-08 01:30:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:30:40.682572 | orchestrator | 2026-04-08 01:30:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:30:43.732401 | orchestrator | 2026-04-08 01:30:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:43.734347 | orchestrator | 2026-04-08 01:30:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:30:43.734422 | orchestrator | 2026-04-08 01:30:43 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 01:30:46.784280 | orchestrator | 2026-04-08 01:30:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:46.785255 | orchestrator | 2026-04-08 01:30:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:30:46.785291 | orchestrator | 2026-04-08 01:30:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:30:49.834566 | orchestrator | 2026-04-08 01:30:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:49.834701 | orchestrator | 2026-04-08 01:30:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:30:49.834716 | orchestrator | 2026-04-08 01:30:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:30:52.884741 | orchestrator | 2026-04-08 01:30:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:52.886935 | orchestrator | 2026-04-08 01:30:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:30:52.886986 | orchestrator | 2026-04-08 01:30:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:30:55.935717 | orchestrator | 2026-04-08 01:30:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:55.936901 | orchestrator | 2026-04-08 01:30:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:30:55.936949 | orchestrator | 2026-04-08 01:30:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:30:58.987651 | orchestrator | 2026-04-08 01:30:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:30:58.989423 | orchestrator | 2026-04-08 01:30:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:30:58.989479 | orchestrator | 2026-04-08 01:30:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:31:02.051266 | orchestrator | 2026-04-08 
01:31:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:31:02.052359 | orchestrator | 2026-04-08 01:31:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:31:02.052463 | orchestrator | 2026-04-08 01:31:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:31:05.099518 | orchestrator | 2026-04-08 01:31:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:31:05.101509 | orchestrator | 2026-04-08 01:31:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:31:05.101781 | orchestrator | 2026-04-08 01:31:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:31:08.152938 | orchestrator | 2026-04-08 01:31:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:31:08.154417 | orchestrator | 2026-04-08 01:31:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:31:08.154472 | orchestrator | 2026-04-08 01:31:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:31:11.198886 | orchestrator | 2026-04-08 01:31:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:31:11.200561 | orchestrator | 2026-04-08 01:31:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:31:11.200634 | orchestrator | 2026-04-08 01:31:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:31:14.249641 | orchestrator | 2026-04-08 01:31:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:31:14.249802 | orchestrator | 2026-04-08 01:31:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:31:14.249812 | orchestrator | 2026-04-08 01:31:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:31:17.298382 | orchestrator | 2026-04-08 01:31:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 01:31:17.301343 | orchestrator | 2026-04-08 01:31:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:31:17.301423 | orchestrator | 2026-04-08 01:31:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:31:20.342258 | orchestrator | 2026-04-08 01:31:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:31:20.343463 | orchestrator | 2026-04-08 01:31:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:31:20.343517 | orchestrator | 2026-04-08 01:31:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:31:23.392144 | orchestrator | 2026-04-08 01:31:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:31:23.394505 | orchestrator | 2026-04-08 01:31:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:31:23.394544 | orchestrator | 2026-04-08 01:31:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:31:26.444441 | orchestrator | 2026-04-08 01:31:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:31:26.447145 | orchestrator | 2026-04-08 01:31:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:31:26.447206 | orchestrator | 2026-04-08 01:31:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:31:29.497038 | orchestrator | 2026-04-08 01:31:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:31:29.501531 | orchestrator | 2026-04-08 01:31:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:31:29.501673 | orchestrator | 2026-04-08 01:31:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:31:32.554114 | orchestrator | 2026-04-08 01:31:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:31:32.558802 | orchestrator | 2026-04-08 01:31:32 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:31:32.558873 | orchestrator | 2026-04-08 01:31:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:31:35.604143 | orchestrator | 2026-04-08 01:31:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:31:35.605709 | orchestrator | 2026-04-08 01:31:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:31:35.605801 | orchestrator | 2026-04-08 01:31:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:31:38.650671 | orchestrator | 2026-04-08 01:31:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:31:38.652009 | orchestrator | 2026-04-08 01:31:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:31:38.652060 | orchestrator | 2026-04-08 01:31:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:31:41.697761 | orchestrator | 2026-04-08 01:31:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:31:41.699499 | orchestrator | 2026-04-08 01:31:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:31:41.699546 | orchestrator | 2026-04-08 01:31:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:31:44.748189 | orchestrator | 2026-04-08 01:31:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:31:44.750127 | orchestrator | 2026-04-08 01:31:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:31:44.750251 | orchestrator | 2026-04-08 01:31:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:31:47.801140 | orchestrator | 2026-04-08 01:31:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:31:47.802235 | orchestrator | 2026-04-08 01:31:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
01:31:47.802272 | orchestrator | 2026-04-08 01:31:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:31:50.848363 | orchestrator | 2026-04-08 01:31:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:31:50.850652 | orchestrator | 2026-04-08 01:31:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:31:50.850732 | orchestrator | 2026-04-08 01:31:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:31:53.900999 | orchestrator | 2026-04-08 01:31:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:31:53.904262 | orchestrator | 2026-04-08 01:31:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:31:53.904315 | orchestrator | 2026-04-08 01:31:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:31:56.951528 | orchestrator | 2026-04-08 01:31:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:31:56.953079 | orchestrator | 2026-04-08 01:31:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:31:56.953142 | orchestrator | 2026-04-08 01:31:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:32:00.007850 | orchestrator | 2026-04-08 01:32:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:32:00.011148 | orchestrator | 2026-04-08 01:32:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:00.011227 | orchestrator | 2026-04-08 01:32:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:32:03.057962 | orchestrator | 2026-04-08 01:32:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:32:03.059203 | orchestrator | 2026-04-08 01:32:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:03.059254 | orchestrator | 2026-04-08 01:32:03 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 01:32:06.106310 | orchestrator | 2026-04-08 01:32:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:32:06.108920 | orchestrator | 2026-04-08 01:32:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:06.108979 | orchestrator | 2026-04-08 01:32:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:32:09.158904 | orchestrator | 2026-04-08 01:32:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:32:09.160552 | orchestrator | 2026-04-08 01:32:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:09.160657 | orchestrator | 2026-04-08 01:32:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:32:12.199044 | orchestrator | 2026-04-08 01:32:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:32:12.199803 | orchestrator | 2026-04-08 01:32:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:12.201211 | orchestrator | 2026-04-08 01:32:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:32:15.243293 | orchestrator | 2026-04-08 01:32:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:32:15.246184 | orchestrator | 2026-04-08 01:32:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:15.246264 | orchestrator | 2026-04-08 01:32:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:32:18.294732 | orchestrator | 2026-04-08 01:32:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:32:18.296266 | orchestrator | 2026-04-08 01:32:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:18.296325 | orchestrator | 2026-04-08 01:32:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:32:21.342887 | orchestrator | 2026-04-08 
01:32:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:32:21.345799 | orchestrator | 2026-04-08 01:32:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:21.345870 | orchestrator | 2026-04-08 01:32:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:32:24.394160 | orchestrator | 2026-04-08 01:32:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:32:24.395474 | orchestrator | 2026-04-08 01:32:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:24.395510 | orchestrator | 2026-04-08 01:32:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:32:27.445741 | orchestrator | 2026-04-08 01:32:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:32:27.449297 | orchestrator | 2026-04-08 01:32:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:27.449400 | orchestrator | 2026-04-08 01:32:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:32:30.496405 | orchestrator | 2026-04-08 01:32:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:32:30.499295 | orchestrator | 2026-04-08 01:32:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:30.499725 | orchestrator | 2026-04-08 01:32:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:32:33.543819 | orchestrator | 2026-04-08 01:32:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:32:33.545196 | orchestrator | 2026-04-08 01:32:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:33.545305 | orchestrator | 2026-04-08 01:32:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:32:36.588926 | orchestrator | 2026-04-08 01:32:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 01:32:36.591734 | orchestrator | 2026-04-08 01:32:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:36.591823 | orchestrator | 2026-04-08 01:32:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:32:39.638284 | orchestrator | 2026-04-08 01:32:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:32:39.639632 | orchestrator | 2026-04-08 01:32:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:39.639683 | orchestrator | 2026-04-08 01:32:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:32:42.691187 | orchestrator | 2026-04-08 01:32:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:32:42.692465 | orchestrator | 2026-04-08 01:32:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:42.692620 | orchestrator | 2026-04-08 01:32:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:32:45.738356 | orchestrator | 2026-04-08 01:32:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:32:45.739749 | orchestrator | 2026-04-08 01:32:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:45.739806 | orchestrator | 2026-04-08 01:32:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:32:48.793156 | orchestrator | 2026-04-08 01:32:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:32:48.794995 | orchestrator | 2026-04-08 01:32:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:48.795050 | orchestrator | 2026-04-08 01:32:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:32:51.839411 | orchestrator | 2026-04-08 01:32:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:32:51.840940 | orchestrator | 2026-04-08 01:32:51 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:51.841022 | orchestrator | 2026-04-08 01:32:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:32:54.886796 | orchestrator | 2026-04-08 01:32:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:32:54.889222 | orchestrator | 2026-04-08 01:32:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:54.889299 | orchestrator | 2026-04-08 01:32:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:32:57.928145 | orchestrator | 2026-04-08 01:32:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:32:57.929337 | orchestrator | 2026-04-08 01:32:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:32:57.929381 | orchestrator | 2026-04-08 01:32:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:33:00.978226 | orchestrator | 2026-04-08 01:33:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:33:00.981916 | orchestrator | 2026-04-08 01:33:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:33:00.982052 | orchestrator | 2026-04-08 01:33:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:33:04.033659 | orchestrator | 2026-04-08 01:33:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:33:04.036399 | orchestrator | 2026-04-08 01:33:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:33:04.036482 | orchestrator | 2026-04-08 01:33:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:33:07.081665 | orchestrator | 2026-04-08 01:33:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:33:07.083923 | orchestrator | 2026-04-08 01:33:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
01:33:07.084422 | orchestrator | 2026-04-08 01:33:07 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:33:10.122140 | orchestrator | 2026-04-08 01:33:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 01:33:10.123887 | orchestrator | 2026-04-08 01:33:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 01:33:10.123912 | orchestrator | 2026-04-08 01:33:10 | INFO  | Wait 1 second(s) until the next check
[... identical status-check cycles repeated every ~3 seconds from 01:33:13 through 01:38:05; tasks e8d40863-7499-4ee6-9471-9efa1265639c and 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 remained in state STARTED throughout ...]
2026-04-08 01:38:08.970127 | orchestrator | 2026-04-08 01:38:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 01:38:08.971891 | orchestrator | 2026-04-08 01:38:08 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:38:08.972063 | orchestrator | 2026-04-08 01:38:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:38:12.021823 | orchestrator | 2026-04-08 01:38:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:38:12.024899 | orchestrator | 2026-04-08 01:38:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:38:12.024980 | orchestrator | 2026-04-08 01:38:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:38:15.071965 | orchestrator | 2026-04-08 01:38:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:38:15.073840 | orchestrator | 2026-04-08 01:38:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:38:15.073894 | orchestrator | 2026-04-08 01:38:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:38:18.120211 | orchestrator | 2026-04-08 01:38:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:38:18.122147 | orchestrator | 2026-04-08 01:38:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:38:18.122200 | orchestrator | 2026-04-08 01:38:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:38:21.171011 | orchestrator | 2026-04-08 01:38:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:38:21.172831 | orchestrator | 2026-04-08 01:38:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:38:21.173254 | orchestrator | 2026-04-08 01:38:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:38:24.220528 | orchestrator | 2026-04-08 01:38:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:38:24.222457 | orchestrator | 2026-04-08 01:38:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
01:38:24.222571 | orchestrator | 2026-04-08 01:38:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:38:27.269875 | orchestrator | 2026-04-08 01:38:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:38:27.271365 | orchestrator | 2026-04-08 01:38:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:38:27.271946 | orchestrator | 2026-04-08 01:38:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:38:30.314167 | orchestrator | 2026-04-08 01:38:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:38:30.316326 | orchestrator | 2026-04-08 01:38:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:38:30.316631 | orchestrator | 2026-04-08 01:38:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:38:33.357337 | orchestrator | 2026-04-08 01:38:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:38:33.360186 | orchestrator | 2026-04-08 01:38:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:38:33.360255 | orchestrator | 2026-04-08 01:38:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:38:36.410272 | orchestrator | 2026-04-08 01:38:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:38:36.411347 | orchestrator | 2026-04-08 01:38:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:38:36.411385 | orchestrator | 2026-04-08 01:38:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:38:39.461877 | orchestrator | 2026-04-08 01:38:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:38:39.462163 | orchestrator | 2026-04-08 01:38:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:38:39.462194 | orchestrator | 2026-04-08 01:38:39 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 01:38:42.507669 | orchestrator | 2026-04-08 01:38:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:38:42.509604 | orchestrator | 2026-04-08 01:38:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:38:42.509650 | orchestrator | 2026-04-08 01:38:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:38:45.556122 | orchestrator | 2026-04-08 01:38:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:38:45.556940 | orchestrator | 2026-04-08 01:38:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:38:45.557673 | orchestrator | 2026-04-08 01:38:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:38:48.601301 | orchestrator | 2026-04-08 01:38:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:38:48.603107 | orchestrator | 2026-04-08 01:38:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:38:48.603205 | orchestrator | 2026-04-08 01:38:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:38:51.652080 | orchestrator | 2026-04-08 01:38:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:38:51.653603 | orchestrator | 2026-04-08 01:38:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:38:51.653690 | orchestrator | 2026-04-08 01:38:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:38:54.701025 | orchestrator | 2026-04-08 01:38:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:38:54.703001 | orchestrator | 2026-04-08 01:38:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:38:54.703058 | orchestrator | 2026-04-08 01:38:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:38:57.749474 | orchestrator | 2026-04-08 
01:38:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:38:57.750609 | orchestrator | 2026-04-08 01:38:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:38:57.750684 | orchestrator | 2026-04-08 01:38:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:00.805132 | orchestrator | 2026-04-08 01:39:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:39:00.806997 | orchestrator | 2026-04-08 01:39:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:39:00.807031 | orchestrator | 2026-04-08 01:39:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:03.859243 | orchestrator | 2026-04-08 01:39:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:39:03.862265 | orchestrator | 2026-04-08 01:39:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:39:03.862331 | orchestrator | 2026-04-08 01:39:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:06.910780 | orchestrator | 2026-04-08 01:39:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:39:06.911952 | orchestrator | 2026-04-08 01:39:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:39:06.911978 | orchestrator | 2026-04-08 01:39:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:09.953937 | orchestrator | 2026-04-08 01:39:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:39:09.955848 | orchestrator | 2026-04-08 01:39:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:39:09.955913 | orchestrator | 2026-04-08 01:39:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:13.000321 | orchestrator | 2026-04-08 01:39:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 01:39:13.002013 | orchestrator | 2026-04-08 01:39:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:39:13.002084 | orchestrator | 2026-04-08 01:39:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:16.057768 | orchestrator | 2026-04-08 01:39:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:39:16.060056 | orchestrator | 2026-04-08 01:39:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:39:16.060130 | orchestrator | 2026-04-08 01:39:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:19.105106 | orchestrator | 2026-04-08 01:39:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:39:19.106185 | orchestrator | 2026-04-08 01:39:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:39:19.106230 | orchestrator | 2026-04-08 01:39:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:22.158222 | orchestrator | 2026-04-08 01:39:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:39:22.160312 | orchestrator | 2026-04-08 01:39:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:39:22.160574 | orchestrator | 2026-04-08 01:39:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:25.203011 | orchestrator | 2026-04-08 01:39:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:39:25.204920 | orchestrator | 2026-04-08 01:39:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:39:25.205041 | orchestrator | 2026-04-08 01:39:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:28.244441 | orchestrator | 2026-04-08 01:39:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:39:28.246265 | orchestrator | 2026-04-08 01:39:28 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:39:28.246283 | orchestrator | 2026-04-08 01:39:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:31.295128 | orchestrator | 2026-04-08 01:39:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:39:31.298177 | orchestrator | 2026-04-08 01:39:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:39:31.298285 | orchestrator | 2026-04-08 01:39:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:34.341859 | orchestrator | 2026-04-08 01:39:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:39:34.342147 | orchestrator | 2026-04-08 01:39:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:39:34.342163 | orchestrator | 2026-04-08 01:39:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:37.383439 | orchestrator | 2026-04-08 01:39:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:39:37.384286 | orchestrator | 2026-04-08 01:39:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:39:37.384329 | orchestrator | 2026-04-08 01:39:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:40.431379 | orchestrator | 2026-04-08 01:39:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:39:40.434180 | orchestrator | 2026-04-08 01:39:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:39:40.434250 | orchestrator | 2026-04-08 01:39:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:43.478553 | orchestrator | 2026-04-08 01:39:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:39:43.478906 | orchestrator | 2026-04-08 01:39:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
01:39:43.478920 | orchestrator | 2026-04-08 01:39:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:46.526271 | orchestrator | 2026-04-08 01:39:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:39:46.527294 | orchestrator | 2026-04-08 01:39:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:39:46.527340 | orchestrator | 2026-04-08 01:39:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:49.575614 | orchestrator | 2026-04-08 01:39:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:39:49.577543 | orchestrator | 2026-04-08 01:39:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:39:49.577642 | orchestrator | 2026-04-08 01:39:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:52.626454 | orchestrator | 2026-04-08 01:39:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:39:52.628232 | orchestrator | 2026-04-08 01:39:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:39:52.628293 | orchestrator | 2026-04-08 01:39:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:55.676263 | orchestrator | 2026-04-08 01:39:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:39:55.679463 | orchestrator | 2026-04-08 01:39:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:39:55.679515 | orchestrator | 2026-04-08 01:39:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:39:58.724113 | orchestrator | 2026-04-08 01:39:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:39:58.726049 | orchestrator | 2026-04-08 01:39:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:39:58.726181 | orchestrator | 2026-04-08 01:39:58 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 01:40:01.778206 | orchestrator | 2026-04-08 01:40:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:40:01.778277 | orchestrator | 2026-04-08 01:40:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:01.778292 | orchestrator | 2026-04-08 01:40:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:40:04.821956 | orchestrator | 2026-04-08 01:40:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:40:04.824763 | orchestrator | 2026-04-08 01:40:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:04.824826 | orchestrator | 2026-04-08 01:40:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:40:07.872680 | orchestrator | 2026-04-08 01:40:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:40:07.875600 | orchestrator | 2026-04-08 01:40:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:07.876060 | orchestrator | 2026-04-08 01:40:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:40:10.924779 | orchestrator | 2026-04-08 01:40:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:40:10.925708 | orchestrator | 2026-04-08 01:40:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:10.925751 | orchestrator | 2026-04-08 01:40:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:40:13.978129 | orchestrator | 2026-04-08 01:40:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:40:13.978211 | orchestrator | 2026-04-08 01:40:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:13.978226 | orchestrator | 2026-04-08 01:40:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:40:17.024169 | orchestrator | 2026-04-08 
01:40:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:40:17.026351 | orchestrator | 2026-04-08 01:40:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:17.026472 | orchestrator | 2026-04-08 01:40:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:40:20.077334 | orchestrator | 2026-04-08 01:40:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:40:20.078750 | orchestrator | 2026-04-08 01:40:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:20.078793 | orchestrator | 2026-04-08 01:40:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:40:23.127947 | orchestrator | 2026-04-08 01:40:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:40:23.130881 | orchestrator | 2026-04-08 01:40:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:23.130950 | orchestrator | 2026-04-08 01:40:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:40:26.182793 | orchestrator | 2026-04-08 01:40:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:40:26.182880 | orchestrator | 2026-04-08 01:40:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:26.182892 | orchestrator | 2026-04-08 01:40:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:40:29.234934 | orchestrator | 2026-04-08 01:40:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:40:29.236703 | orchestrator | 2026-04-08 01:40:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:29.236760 | orchestrator | 2026-04-08 01:40:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:40:32.283348 | orchestrator | 2026-04-08 01:40:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 01:40:32.284864 | orchestrator | 2026-04-08 01:40:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:32.284915 | orchestrator | 2026-04-08 01:40:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:40:35.339162 | orchestrator | 2026-04-08 01:40:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:40:35.340708 | orchestrator | 2026-04-08 01:40:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:35.340755 | orchestrator | 2026-04-08 01:40:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:40:38.391354 | orchestrator | 2026-04-08 01:40:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:40:38.393415 | orchestrator | 2026-04-08 01:40:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:38.393510 | orchestrator | 2026-04-08 01:40:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:40:41.441351 | orchestrator | 2026-04-08 01:40:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:40:41.443148 | orchestrator | 2026-04-08 01:40:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:41.443177 | orchestrator | 2026-04-08 01:40:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:40:44.497285 | orchestrator | 2026-04-08 01:40:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:40:44.500357 | orchestrator | 2026-04-08 01:40:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:44.500511 | orchestrator | 2026-04-08 01:40:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:40:47.549821 | orchestrator | 2026-04-08 01:40:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:40:47.551699 | orchestrator | 2026-04-08 01:40:47 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:47.551768 | orchestrator | 2026-04-08 01:40:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:40:50.603604 | orchestrator | 2026-04-08 01:40:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:40:50.606897 | orchestrator | 2026-04-08 01:40:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:50.607044 | orchestrator | 2026-04-08 01:40:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:40:53.652702 | orchestrator | 2026-04-08 01:40:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:40:53.654221 | orchestrator | 2026-04-08 01:40:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:53.654259 | orchestrator | 2026-04-08 01:40:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:40:56.703997 | orchestrator | 2026-04-08 01:40:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:40:56.704848 | orchestrator | 2026-04-08 01:40:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:56.704870 | orchestrator | 2026-04-08 01:40:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:40:59.751714 | orchestrator | 2026-04-08 01:40:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:40:59.753201 | orchestrator | 2026-04-08 01:40:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:40:59.753250 | orchestrator | 2026-04-08 01:40:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:41:02.798529 | orchestrator | 2026-04-08 01:41:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:41:02.800626 | orchestrator | 2026-04-08 01:41:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
01:41:02.800691 | orchestrator | 2026-04-08 01:41:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:41:05.845921 | orchestrator | 2026-04-08 01:41:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:41:05.847867 | orchestrator | 2026-04-08 01:41:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:41:05.847920 | orchestrator | 2026-04-08 01:41:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:41:08.894984 | orchestrator | 2026-04-08 01:41:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:41:08.897070 | orchestrator | 2026-04-08 01:41:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:41:08.897114 | orchestrator | 2026-04-08 01:41:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:41:11.942531 | orchestrator | 2026-04-08 01:41:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:41:11.943840 | orchestrator | 2026-04-08 01:41:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:41:11.944322 | orchestrator | 2026-04-08 01:41:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:41:14.990277 | orchestrator | 2026-04-08 01:41:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:41:14.992108 | orchestrator | 2026-04-08 01:41:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:41:14.992155 | orchestrator | 2026-04-08 01:41:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:41:18.041180 | orchestrator | 2026-04-08 01:41:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:41:18.042644 | orchestrator | 2026-04-08 01:41:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:41:18.042824 | orchestrator | 2026-04-08 01:41:18 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 01:41:21.089077 | orchestrator | 2026-04-08 01:41:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:41:21.090859 | orchestrator | 2026-04-08 01:41:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:41:21.090908 | orchestrator | 2026-04-08 01:41:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:41:24.134700 | orchestrator | 2026-04-08 01:41:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:41:24.136301 | orchestrator | 2026-04-08 01:41:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:41:24.136567 | orchestrator | 2026-04-08 01:41:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:41:27.182186 | orchestrator | 2026-04-08 01:41:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:41:27.184248 | orchestrator | 2026-04-08 01:41:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:41:27.184328 | orchestrator | 2026-04-08 01:41:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:41:30.238382 | orchestrator | 2026-04-08 01:41:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:41:30.240134 | orchestrator | 2026-04-08 01:41:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:41:30.240182 | orchestrator | 2026-04-08 01:41:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:41:33.284779 | orchestrator | 2026-04-08 01:41:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:41:33.285164 | orchestrator | 2026-04-08 01:41:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:41:33.285275 | orchestrator | 2026-04-08 01:41:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:41:36.333799 | orchestrator | 2026-04-08 
01:41:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:41:36.334806 | orchestrator | 2026-04-08 01:41:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:41:36.335004 | orchestrator | 2026-04-08 01:41:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:41:39.381572 | orchestrator | 2026-04-08 01:41:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:41:39.383928 | orchestrator | 2026-04-08 01:41:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:41:39.384019 | orchestrator | 2026-04-08 01:41:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:41:42.437276 | orchestrator | 2026-04-08 01:41:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:41:42.438619 | orchestrator | 2026-04-08 01:41:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:41:42.438684 | orchestrator | 2026-04-08 01:41:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:41:45.486096 | orchestrator | 2026-04-08 01:41:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:41:45.487531 | orchestrator | 2026-04-08 01:41:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:41:45.487599 | orchestrator | 2026-04-08 01:41:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:41:48.528985 | orchestrator | 2026-04-08 01:41:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:41:48.530582 | orchestrator | 2026-04-08 01:41:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:41:48.530650 | orchestrator | 2026-04-08 01:41:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:41:51.577774 | orchestrator | 2026-04-08 01:41:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 01:41:51.580079 | orchestrator | 2026-04-08 01:41:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:41:51.580131 | orchestrator | 2026-04-08 01:41:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:41:54.627588 | orchestrator | 2026-04-08 01:41:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:41:54.629807 | orchestrator | 2026-04-08 01:41:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:41:54.629861 | orchestrator | 2026-04-08 01:41:54 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated approximately every 3 seconds; tasks e8d40863-7499-4ee6-9471-9efa1265639c and 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 remained in state STARTED throughout; the log contains a gap between 01:45:21 and 01:47:25 ...]
2026-04-08 01:49:23.904427 | orchestrator | 2026-04-08 01:49:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:49:23.906333 | orchestrator | 2026-04-08 01:49:23 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:49:23.906468 | orchestrator | 2026-04-08 01:49:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:49:26.950725 | orchestrator | 2026-04-08 01:49:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:49:26.954046 | orchestrator | 2026-04-08 01:49:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:49:26.954229 | orchestrator | 2026-04-08 01:49:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:49:30.011786 | orchestrator | 2026-04-08 01:49:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:49:30.013266 | orchestrator | 2026-04-08 01:49:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:49:30.013505 | orchestrator | 2026-04-08 01:49:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:49:33.060686 | orchestrator | 2026-04-08 01:49:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:49:33.062270 | orchestrator | 2026-04-08 01:49:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:49:33.062358 | orchestrator | 2026-04-08 01:49:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:49:36.107843 | orchestrator | 2026-04-08 01:49:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:49:36.109970 | orchestrator | 2026-04-08 01:49:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:49:36.110085 | orchestrator | 2026-04-08 01:49:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:49:39.153042 | orchestrator | 2026-04-08 01:49:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:49:39.154491 | orchestrator | 2026-04-08 01:49:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
01:49:39.154542 | orchestrator | 2026-04-08 01:49:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:49:42.203269 | orchestrator | 2026-04-08 01:49:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:49:42.207090 | orchestrator | 2026-04-08 01:49:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:49:42.207182 | orchestrator | 2026-04-08 01:49:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:49:45.251924 | orchestrator | 2026-04-08 01:49:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:49:45.255549 | orchestrator | 2026-04-08 01:49:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:49:45.255626 | orchestrator | 2026-04-08 01:49:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:49:48.302580 | orchestrator | 2026-04-08 01:49:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:49:48.304552 | orchestrator | 2026-04-08 01:49:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:49:48.304610 | orchestrator | 2026-04-08 01:49:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:49:51.351891 | orchestrator | 2026-04-08 01:49:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:49:51.353064 | orchestrator | 2026-04-08 01:49:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:49:51.353253 | orchestrator | 2026-04-08 01:49:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:49:54.393505 | orchestrator | 2026-04-08 01:49:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:49:54.394666 | orchestrator | 2026-04-08 01:49:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:49:54.394789 | orchestrator | 2026-04-08 01:49:54 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 01:49:57.441189 | orchestrator | 2026-04-08 01:49:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:49:57.443622 | orchestrator | 2026-04-08 01:49:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:49:57.443713 | orchestrator | 2026-04-08 01:49:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:00.482722 | orchestrator | 2026-04-08 01:50:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:50:00.483075 | orchestrator | 2026-04-08 01:50:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:50:00.483103 | orchestrator | 2026-04-08 01:50:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:03.530327 | orchestrator | 2026-04-08 01:50:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:50:03.531450 | orchestrator | 2026-04-08 01:50:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:50:03.531533 | orchestrator | 2026-04-08 01:50:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:06.581298 | orchestrator | 2026-04-08 01:50:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:50:06.582617 | orchestrator | 2026-04-08 01:50:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:50:06.582711 | orchestrator | 2026-04-08 01:50:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:09.627855 | orchestrator | 2026-04-08 01:50:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:50:09.629367 | orchestrator | 2026-04-08 01:50:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:50:09.629544 | orchestrator | 2026-04-08 01:50:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:12.675330 | orchestrator | 2026-04-08 
01:50:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:50:12.676857 | orchestrator | 2026-04-08 01:50:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:50:12.676951 | orchestrator | 2026-04-08 01:50:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:15.721059 | orchestrator | 2026-04-08 01:50:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:50:15.722373 | orchestrator | 2026-04-08 01:50:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:50:15.722408 | orchestrator | 2026-04-08 01:50:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:18.770102 | orchestrator | 2026-04-08 01:50:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:50:18.771950 | orchestrator | 2026-04-08 01:50:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:50:18.772245 | orchestrator | 2026-04-08 01:50:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:21.817050 | orchestrator | 2026-04-08 01:50:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:50:21.819872 | orchestrator | 2026-04-08 01:50:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:50:21.819934 | orchestrator | 2026-04-08 01:50:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:24.861775 | orchestrator | 2026-04-08 01:50:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:50:24.863416 | orchestrator | 2026-04-08 01:50:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:50:24.863499 | orchestrator | 2026-04-08 01:50:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:27.907940 | orchestrator | 2026-04-08 01:50:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 01:50:27.909161 | orchestrator | 2026-04-08 01:50:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:50:27.909279 | orchestrator | 2026-04-08 01:50:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:30.952115 | orchestrator | 2026-04-08 01:50:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:50:30.954837 | orchestrator | 2026-04-08 01:50:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:50:30.954868 | orchestrator | 2026-04-08 01:50:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:33.994078 | orchestrator | 2026-04-08 01:50:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:50:33.996517 | orchestrator | 2026-04-08 01:50:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:50:33.996597 | orchestrator | 2026-04-08 01:50:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:37.044042 | orchestrator | 2026-04-08 01:50:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:50:37.046256 | orchestrator | 2026-04-08 01:50:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:50:37.046305 | orchestrator | 2026-04-08 01:50:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:40.091801 | orchestrator | 2026-04-08 01:50:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:50:40.093027 | orchestrator | 2026-04-08 01:50:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:50:40.093203 | orchestrator | 2026-04-08 01:50:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:43.140967 | orchestrator | 2026-04-08 01:50:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:50:43.142740 | orchestrator | 2026-04-08 01:50:43 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:50:43.143052 | orchestrator | 2026-04-08 01:50:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:46.185813 | orchestrator | 2026-04-08 01:50:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:50:46.187202 | orchestrator | 2026-04-08 01:50:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:50:46.187253 | orchestrator | 2026-04-08 01:50:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:49.233821 | orchestrator | 2026-04-08 01:50:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:50:49.235883 | orchestrator | 2026-04-08 01:50:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:50:49.235976 | orchestrator | 2026-04-08 01:50:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:52.279716 | orchestrator | 2026-04-08 01:50:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:50:52.281373 | orchestrator | 2026-04-08 01:50:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:50:52.281418 | orchestrator | 2026-04-08 01:50:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:55.325355 | orchestrator | 2026-04-08 01:50:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:50:55.326822 | orchestrator | 2026-04-08 01:50:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:50:55.326895 | orchestrator | 2026-04-08 01:50:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:50:58.373586 | orchestrator | 2026-04-08 01:50:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:50:58.375997 | orchestrator | 2026-04-08 01:50:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
01:50:58.376204 | orchestrator | 2026-04-08 01:50:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:51:01.423175 | orchestrator | 2026-04-08 01:51:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:51:01.423941 | orchestrator | 2026-04-08 01:51:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:01.424425 | orchestrator | 2026-04-08 01:51:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:51:04.463212 | orchestrator | 2026-04-08 01:51:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:51:04.465101 | orchestrator | 2026-04-08 01:51:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:04.465166 | orchestrator | 2026-04-08 01:51:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:51:07.519005 | orchestrator | 2026-04-08 01:51:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:51:07.521256 | orchestrator | 2026-04-08 01:51:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:07.521424 | orchestrator | 2026-04-08 01:51:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:51:10.565626 | orchestrator | 2026-04-08 01:51:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:51:10.566040 | orchestrator | 2026-04-08 01:51:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:10.566231 | orchestrator | 2026-04-08 01:51:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:51:13.611981 | orchestrator | 2026-04-08 01:51:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:51:13.614413 | orchestrator | 2026-04-08 01:51:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:13.614551 | orchestrator | 2026-04-08 01:51:13 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 01:51:16.660372 | orchestrator | 2026-04-08 01:51:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:51:16.662680 | orchestrator | 2026-04-08 01:51:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:16.662801 | orchestrator | 2026-04-08 01:51:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:51:19.709357 | orchestrator | 2026-04-08 01:51:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:51:19.711734 | orchestrator | 2026-04-08 01:51:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:19.711812 | orchestrator | 2026-04-08 01:51:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:51:22.757973 | orchestrator | 2026-04-08 01:51:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:51:22.759437 | orchestrator | 2026-04-08 01:51:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:22.759543 | orchestrator | 2026-04-08 01:51:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:51:25.799987 | orchestrator | 2026-04-08 01:51:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:51:25.801501 | orchestrator | 2026-04-08 01:51:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:25.801532 | orchestrator | 2026-04-08 01:51:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:51:28.847975 | orchestrator | 2026-04-08 01:51:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:51:28.850733 | orchestrator | 2026-04-08 01:51:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:28.850777 | orchestrator | 2026-04-08 01:51:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:51:31.896779 | orchestrator | 2026-04-08 
01:51:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:51:31.898741 | orchestrator | 2026-04-08 01:51:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:31.898969 | orchestrator | 2026-04-08 01:51:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:51:34.945744 | orchestrator | 2026-04-08 01:51:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:51:34.947806 | orchestrator | 2026-04-08 01:51:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:34.947948 | orchestrator | 2026-04-08 01:51:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:51:37.996126 | orchestrator | 2026-04-08 01:51:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:51:37.997663 | orchestrator | 2026-04-08 01:51:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:37.997833 | orchestrator | 2026-04-08 01:51:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:51:41.038403 | orchestrator | 2026-04-08 01:51:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:51:41.040597 | orchestrator | 2026-04-08 01:51:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:41.040637 | orchestrator | 2026-04-08 01:51:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:51:44.085213 | orchestrator | 2026-04-08 01:51:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:51:44.087297 | orchestrator | 2026-04-08 01:51:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:44.087435 | orchestrator | 2026-04-08 01:51:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:51:47.133940 | orchestrator | 2026-04-08 01:51:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 01:51:47.134928 | orchestrator | 2026-04-08 01:51:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:47.135161 | orchestrator | 2026-04-08 01:51:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:51:50.183637 | orchestrator | 2026-04-08 01:51:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:51:50.185180 | orchestrator | 2026-04-08 01:51:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:50.185243 | orchestrator | 2026-04-08 01:51:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:51:53.236958 | orchestrator | 2026-04-08 01:51:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:51:53.239773 | orchestrator | 2026-04-08 01:51:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:53.239840 | orchestrator | 2026-04-08 01:51:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:51:56.287097 | orchestrator | 2026-04-08 01:51:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:51:56.289477 | orchestrator | 2026-04-08 01:51:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:56.289578 | orchestrator | 2026-04-08 01:51:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:51:59.335238 | orchestrator | 2026-04-08 01:51:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:51:59.337026 | orchestrator | 2026-04-08 01:51:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:51:59.337156 | orchestrator | 2026-04-08 01:51:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:52:02.381883 | orchestrator | 2026-04-08 01:52:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:52:02.383602 | orchestrator | 2026-04-08 01:52:02 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:52:02.383642 | orchestrator | 2026-04-08 01:52:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:52:05.428228 | orchestrator | 2026-04-08 01:52:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:52:05.430049 | orchestrator | 2026-04-08 01:52:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:52:05.430111 | orchestrator | 2026-04-08 01:52:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:52:08.480715 | orchestrator | 2026-04-08 01:52:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:52:08.482092 | orchestrator | 2026-04-08 01:52:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:52:08.482133 | orchestrator | 2026-04-08 01:52:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:52:11.523553 | orchestrator | 2026-04-08 01:52:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:52:11.524945 | orchestrator | 2026-04-08 01:52:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:52:11.525118 | orchestrator | 2026-04-08 01:52:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:52:14.567273 | orchestrator | 2026-04-08 01:52:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:52:14.568900 | orchestrator | 2026-04-08 01:52:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:52:14.568977 | orchestrator | 2026-04-08 01:52:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:52:17.614780 | orchestrator | 2026-04-08 01:52:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:52:17.620248 | orchestrator | 2026-04-08 01:52:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
01:52:17.620478 | orchestrator | 2026-04-08 01:52:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:52:20.682551 | orchestrator | 2026-04-08 01:52:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:52:20.682617 | orchestrator | 2026-04-08 01:52:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:52:20.682623 | orchestrator | 2026-04-08 01:52:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:52:23.736658 | orchestrator | 2026-04-08 01:52:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:52:23.737152 | orchestrator | 2026-04-08 01:52:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:52:23.737184 | orchestrator | 2026-04-08 01:52:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:52:26.791544 | orchestrator | 2026-04-08 01:52:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:52:26.794246 | orchestrator | 2026-04-08 01:52:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:52:26.794310 | orchestrator | 2026-04-08 01:52:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:52:29.840766 | orchestrator | 2026-04-08 01:52:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:52:29.842828 | orchestrator | 2026-04-08 01:52:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:52:29.842968 | orchestrator | 2026-04-08 01:52:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:52:32.897168 | orchestrator | 2026-04-08 01:52:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:52:32.898340 | orchestrator | 2026-04-08 01:52:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:52:32.898382 | orchestrator | 2026-04-08 01:52:32 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 01:52:35.948594 | orchestrator | 2026-04-08 01:52:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:52:35.950338 | orchestrator | 2026-04-08 01:52:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:52:35.950478 | orchestrator | 2026-04-08 01:52:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:52:39.000410 | orchestrator | 2026-04-08 01:52:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:52:39.002697 | orchestrator | 2026-04-08 01:52:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:52:39.002771 | orchestrator | 2026-04-08 01:52:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:52:42.056292 | orchestrator | 2026-04-08 01:52:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:52:42.057592 | orchestrator | 2026-04-08 01:52:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:52:42.057647 | orchestrator | 2026-04-08 01:52:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:52:45.105030 | orchestrator | 2026-04-08 01:52:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:52:45.107101 | orchestrator | 2026-04-08 01:52:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:52:45.107162 | orchestrator | 2026-04-08 01:52:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:52:48.159104 | orchestrator | 2026-04-08 01:52:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:52:48.161137 | orchestrator | 2026-04-08 01:52:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:52:48.161164 | orchestrator | 2026-04-08 01:52:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:52:51.205355 | orchestrator | 2026-04-08 
01:52:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:52:51.207583 | orchestrator | 2026-04-08 01:52:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:52:51.207738 | orchestrator | 2026-04-08 01:52:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:52:54.257073 | orchestrator | 2026-04-08 01:52:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:52:54.258993 | orchestrator | 2026-04-08 01:52:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:52:54.259041 | orchestrator | 2026-04-08 01:52:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:52:57.313582 | orchestrator | 2026-04-08 01:52:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:52:57.313873 | orchestrator | 2026-04-08 01:52:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:52:57.313910 | orchestrator | 2026-04-08 01:52:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:53:00.361871 | orchestrator | 2026-04-08 01:53:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:53:00.364349 | orchestrator | 2026-04-08 01:53:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:53:00.364425 | orchestrator | 2026-04-08 01:53:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:53:03.416236 | orchestrator | 2026-04-08 01:53:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:53:03.418267 | orchestrator | 2026-04-08 01:53:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:53:03.418378 | orchestrator | 2026-04-08 01:53:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:53:06.466330 | orchestrator | 2026-04-08 01:53:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 01:53:06.467465 | orchestrator | 2026-04-08 01:53:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 01:53:06.467553 | orchestrator | 2026-04-08 01:53:06 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:53:09.516360 | orchestrator | 2026-04-08 01:53:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 01:53:09.518442 | orchestrator | 2026-04-08 01:53:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 01:53:09.518508 | orchestrator | 2026-04-08 01:53:09 | INFO  | Wait 1 second(s) until the next check
2026-04-08 01:58:23.557362 | orchestrator | 2026-04-08 01:58:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state
STARTED 2026-04-08 01:58:23.558990 | orchestrator | 2026-04-08 01:58:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:58:23.559142 | orchestrator | 2026-04-08 01:58:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:58:26.605310 | orchestrator | 2026-04-08 01:58:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:58:26.607118 | orchestrator | 2026-04-08 01:58:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:58:26.607185 | orchestrator | 2026-04-08 01:58:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:58:29.651099 | orchestrator | 2026-04-08 01:58:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:58:29.652815 | orchestrator | 2026-04-08 01:58:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:58:29.652874 | orchestrator | 2026-04-08 01:58:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:58:32.698477 | orchestrator | 2026-04-08 01:58:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:58:32.700004 | orchestrator | 2026-04-08 01:58:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:58:32.700041 | orchestrator | 2026-04-08 01:58:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:58:35.749629 | orchestrator | 2026-04-08 01:58:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:58:35.751470 | orchestrator | 2026-04-08 01:58:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:58:35.751575 | orchestrator | 2026-04-08 01:58:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:58:38.799217 | orchestrator | 2026-04-08 01:58:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:58:38.801042 | orchestrator | 2026-04-08 01:58:38 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:58:38.801099 | orchestrator | 2026-04-08 01:58:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:58:41.851100 | orchestrator | 2026-04-08 01:58:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:58:41.851684 | orchestrator | 2026-04-08 01:58:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:58:41.851845 | orchestrator | 2026-04-08 01:58:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:58:44.900751 | orchestrator | 2026-04-08 01:58:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:58:44.902712 | orchestrator | 2026-04-08 01:58:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:58:44.902769 | orchestrator | 2026-04-08 01:58:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:58:47.942274 | orchestrator | 2026-04-08 01:58:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:58:47.944112 | orchestrator | 2026-04-08 01:58:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:58:47.944189 | orchestrator | 2026-04-08 01:58:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:58:50.992670 | orchestrator | 2026-04-08 01:58:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:58:50.994779 | orchestrator | 2026-04-08 01:58:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:58:50.994843 | orchestrator | 2026-04-08 01:58:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:58:54.038190 | orchestrator | 2026-04-08 01:58:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:58:54.040181 | orchestrator | 2026-04-08 01:58:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
01:58:54.040264 | orchestrator | 2026-04-08 01:58:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:58:57.084929 | orchestrator | 2026-04-08 01:58:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:58:57.087401 | orchestrator | 2026-04-08 01:58:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:58:57.087618 | orchestrator | 2026-04-08 01:58:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:59:00.133116 | orchestrator | 2026-04-08 01:59:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:59:00.134583 | orchestrator | 2026-04-08 01:59:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:00.134822 | orchestrator | 2026-04-08 01:59:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:59:03.182610 | orchestrator | 2026-04-08 01:59:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:59:03.184667 | orchestrator | 2026-04-08 01:59:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:03.184721 | orchestrator | 2026-04-08 01:59:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:59:06.244146 | orchestrator | 2026-04-08 01:59:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:59:06.245490 | orchestrator | 2026-04-08 01:59:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:06.245517 | orchestrator | 2026-04-08 01:59:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:59:09.295891 | orchestrator | 2026-04-08 01:59:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:59:09.298390 | orchestrator | 2026-04-08 01:59:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:09.298475 | orchestrator | 2026-04-08 01:59:09 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 01:59:12.344102 | orchestrator | 2026-04-08 01:59:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:59:12.345604 | orchestrator | 2026-04-08 01:59:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:12.345667 | orchestrator | 2026-04-08 01:59:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:59:15.396860 | orchestrator | 2026-04-08 01:59:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:59:15.399091 | orchestrator | 2026-04-08 01:59:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:15.399279 | orchestrator | 2026-04-08 01:59:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:59:18.444182 | orchestrator | 2026-04-08 01:59:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:59:18.445316 | orchestrator | 2026-04-08 01:59:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:18.445463 | orchestrator | 2026-04-08 01:59:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:59:21.487379 | orchestrator | 2026-04-08 01:59:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:59:21.489196 | orchestrator | 2026-04-08 01:59:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:21.489251 | orchestrator | 2026-04-08 01:59:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:59:24.534823 | orchestrator | 2026-04-08 01:59:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:59:24.536551 | orchestrator | 2026-04-08 01:59:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:24.536633 | orchestrator | 2026-04-08 01:59:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:59:27.581919 | orchestrator | 2026-04-08 
01:59:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:59:27.583193 | orchestrator | 2026-04-08 01:59:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:27.583232 | orchestrator | 2026-04-08 01:59:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:59:30.624713 | orchestrator | 2026-04-08 01:59:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:59:30.624783 | orchestrator | 2026-04-08 01:59:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:30.624789 | orchestrator | 2026-04-08 01:59:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:59:33.666560 | orchestrator | 2026-04-08 01:59:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:59:33.668289 | orchestrator | 2026-04-08 01:59:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:33.668407 | orchestrator | 2026-04-08 01:59:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:59:36.721415 | orchestrator | 2026-04-08 01:59:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:59:36.723509 | orchestrator | 2026-04-08 01:59:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:36.723604 | orchestrator | 2026-04-08 01:59:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:59:39.765884 | orchestrator | 2026-04-08 01:59:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:59:39.766996 | orchestrator | 2026-04-08 01:59:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:39.767051 | orchestrator | 2026-04-08 01:59:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:59:42.810164 | orchestrator | 2026-04-08 01:59:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 01:59:42.811120 | orchestrator | 2026-04-08 01:59:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:42.811172 | orchestrator | 2026-04-08 01:59:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:59:45.857887 | orchestrator | 2026-04-08 01:59:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:59:45.858780 | orchestrator | 2026-04-08 01:59:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:45.858822 | orchestrator | 2026-04-08 01:59:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:59:48.913466 | orchestrator | 2026-04-08 01:59:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:59:48.915090 | orchestrator | 2026-04-08 01:59:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:48.915187 | orchestrator | 2026-04-08 01:59:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:59:51.971338 | orchestrator | 2026-04-08 01:59:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:59:51.971462 | orchestrator | 2026-04-08 01:59:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:51.971479 | orchestrator | 2026-04-08 01:59:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:59:55.022854 | orchestrator | 2026-04-08 01:59:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:59:55.024495 | orchestrator | 2026-04-08 01:59:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:55.024574 | orchestrator | 2026-04-08 01:59:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 01:59:58.076686 | orchestrator | 2026-04-08 01:59:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 01:59:58.078713 | orchestrator | 2026-04-08 01:59:58 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 01:59:58.078772 | orchestrator | 2026-04-08 01:59:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:00:01.130489 | orchestrator | 2026-04-08 02:00:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:01.133773 | orchestrator | 2026-04-08 02:00:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:00:01.134375 | orchestrator | 2026-04-08 02:00:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:00:04.181879 | orchestrator | 2026-04-08 02:00:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:04.183928 | orchestrator | 2026-04-08 02:00:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:00:04.184004 | orchestrator | 2026-04-08 02:00:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:00:07.231828 | orchestrator | 2026-04-08 02:00:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:07.233045 | orchestrator | 2026-04-08 02:00:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:00:07.233094 | orchestrator | 2026-04-08 02:00:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:00:10.279615 | orchestrator | 2026-04-08 02:00:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:10.280993 | orchestrator | 2026-04-08 02:00:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:00:10.281176 | orchestrator | 2026-04-08 02:00:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:00:13.328148 | orchestrator | 2026-04-08 02:00:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:13.329850 | orchestrator | 2026-04-08 02:00:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:00:13.330161 | orchestrator | 2026-04-08 02:00:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:00:16.374326 | orchestrator | 2026-04-08 02:00:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:16.375842 | orchestrator | 2026-04-08 02:00:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:00:16.375887 | orchestrator | 2026-04-08 02:00:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:00:19.422399 | orchestrator | 2026-04-08 02:00:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:19.423752 | orchestrator | 2026-04-08 02:00:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:00:19.423946 | orchestrator | 2026-04-08 02:00:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:00:22.474584 | orchestrator | 2026-04-08 02:00:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:22.475914 | orchestrator | 2026-04-08 02:00:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:00:22.476023 | orchestrator | 2026-04-08 02:00:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:00:25.521249 | orchestrator | 2026-04-08 02:00:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:25.524594 | orchestrator | 2026-04-08 02:00:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:00:25.524644 | orchestrator | 2026-04-08 02:00:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:00:28.569245 | orchestrator | 2026-04-08 02:00:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:28.570921 | orchestrator | 2026-04-08 02:00:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:00:28.570985 | orchestrator | 2026-04-08 02:00:28 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 02:00:31.618928 | orchestrator | 2026-04-08 02:00:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:31.619877 | orchestrator | 2026-04-08 02:00:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:00:31.619985 | orchestrator | 2026-04-08 02:00:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:00:34.667087 | orchestrator | 2026-04-08 02:00:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:34.668700 | orchestrator | 2026-04-08 02:00:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:00:34.668750 | orchestrator | 2026-04-08 02:00:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:00:37.712925 | orchestrator | 2026-04-08 02:00:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:37.714765 | orchestrator | 2026-04-08 02:00:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:00:37.714816 | orchestrator | 2026-04-08 02:00:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:00:40.760972 | orchestrator | 2026-04-08 02:00:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:40.762761 | orchestrator | 2026-04-08 02:00:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:00:40.762798 | orchestrator | 2026-04-08 02:00:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:00:43.812354 | orchestrator | 2026-04-08 02:00:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:43.813559 | orchestrator | 2026-04-08 02:00:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:00:43.813795 | orchestrator | 2026-04-08 02:00:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:00:46.854867 | orchestrator | 2026-04-08 
02:00:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:46.856336 | orchestrator | 2026-04-08 02:00:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:00:46.856402 | orchestrator | 2026-04-08 02:00:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:00:49.900530 | orchestrator | 2026-04-08 02:00:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:49.901854 | orchestrator | 2026-04-08 02:00:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:00:49.901912 | orchestrator | 2026-04-08 02:00:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:00:52.949946 | orchestrator | 2026-04-08 02:00:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:52.951164 | orchestrator | 2026-04-08 02:00:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:00:52.951184 | orchestrator | 2026-04-08 02:00:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:00:56.002369 | orchestrator | 2026-04-08 02:00:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:56.003910 | orchestrator | 2026-04-08 02:00:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:00:56.003955 | orchestrator | 2026-04-08 02:00:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:00:59.052403 | orchestrator | 2026-04-08 02:00:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:00:59.052912 | orchestrator | 2026-04-08 02:00:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:00:59.052940 | orchestrator | 2026-04-08 02:00:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:01:02.096247 | orchestrator | 2026-04-08 02:01:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 02:01:02.098292 | orchestrator | 2026-04-08 02:01:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:01:02.098326 | orchestrator | 2026-04-08 02:01:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:01:05.143148 | orchestrator | 2026-04-08 02:01:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:01:05.144767 | orchestrator | 2026-04-08 02:01:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:01:05.144812 | orchestrator | 2026-04-08 02:01:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:01:08.190926 | orchestrator | 2026-04-08 02:01:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:01:08.192470 | orchestrator | 2026-04-08 02:01:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:01:08.192502 | orchestrator | 2026-04-08 02:01:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:01:11.239408 | orchestrator | 2026-04-08 02:01:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:01:11.240966 | orchestrator | 2026-04-08 02:01:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:01:11.241144 | orchestrator | 2026-04-08 02:01:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:01:14.290527 | orchestrator | 2026-04-08 02:01:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:01:14.292023 | orchestrator | 2026-04-08 02:01:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:01:14.292143 | orchestrator | 2026-04-08 02:01:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:01:17.335782 | orchestrator | 2026-04-08 02:01:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:01:17.336841 | orchestrator | 2026-04-08 02:01:17 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:01:17.336929 | orchestrator | 2026-04-08 02:01:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:01:20.383337 | orchestrator | 2026-04-08 02:01:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:01:20.384741 | orchestrator | 2026-04-08 02:01:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:01:20.384779 | orchestrator | 2026-04-08 02:01:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:01:23.424927 | orchestrator | 2026-04-08 02:01:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:01:23.426458 | orchestrator | 2026-04-08 02:01:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:01:23.426513 | orchestrator | 2026-04-08 02:01:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:01:26.470704 | orchestrator | 2026-04-08 02:01:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:01:26.471882 | orchestrator | 2026-04-08 02:01:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:01:26.472033 | orchestrator | 2026-04-08 02:01:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:01:29.520311 | orchestrator | 2026-04-08 02:01:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:01:29.521657 | orchestrator | 2026-04-08 02:01:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:01:29.522173 | orchestrator | 2026-04-08 02:01:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:01:32.568576 | orchestrator | 2026-04-08 02:01:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:01:32.570827 | orchestrator | 2026-04-08 02:01:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:01:32.570909 | orchestrator | 2026-04-08 02:01:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:01:35.623687 | orchestrator | 2026-04-08 02:01:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:01:35.626680 | orchestrator | 2026-04-08 02:01:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:01:35.626751 | orchestrator | 2026-04-08 02:01:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:01:38.675445 | orchestrator | 2026-04-08 02:01:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:01:38.677161 | orchestrator | 2026-04-08 02:01:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:01:38.677279 | orchestrator | 2026-04-08 02:01:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:01:41.737086 | orchestrator | 2026-04-08 02:01:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:01:41.739594 | orchestrator | 2026-04-08 02:01:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:01:41.739628 | orchestrator | 2026-04-08 02:01:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:01:44.791294 | orchestrator | 2026-04-08 02:01:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:01:44.792644 | orchestrator | 2026-04-08 02:01:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:01:44.792686 | orchestrator | 2026-04-08 02:01:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:01:47.845878 | orchestrator | 2026-04-08 02:01:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:01:47.847418 | orchestrator | 2026-04-08 02:01:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:01:47.847457 | orchestrator | 2026-04-08 02:01:47 | INFO  | Wait 1 second(s) 
until the next check
2026-04-08 02:01:50.899654 | orchestrator | 2026-04-08 02:01:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 02:01:50.901078 | orchestrator | 2026-04-08 02:01:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 02:01:50.901301 | orchestrator | 2026-04-08 02:01:50 | INFO  | Wait 1 second(s) until the next check
[... identical polling entries repeated every ~3 seconds from 02:01:53 to 02:07:01; both tasks remained in state STARTED throughout ...]
2026-04-08 02:07:05.039124 | orchestrator | 2026-04-08 02:07:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 02:07:05.040527 | orchestrator | 2026-04-08 02:07:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 02:07:05.040584 | orchestrator | 2026-04-08 02:07:05 | INFO  | Wait 1 second(s)
until the next check 2026-04-08 02:07:08.093917 | orchestrator | 2026-04-08 02:07:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:07:08.096373 | orchestrator | 2026-04-08 02:07:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:07:08.096486 | orchestrator | 2026-04-08 02:07:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:07:11.147428 | orchestrator | 2026-04-08 02:07:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:07:11.149057 | orchestrator | 2026-04-08 02:07:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:07:11.149104 | orchestrator | 2026-04-08 02:07:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:07:14.196787 | orchestrator | 2026-04-08 02:07:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:07:14.198893 | orchestrator | 2026-04-08 02:07:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:07:14.198991 | orchestrator | 2026-04-08 02:07:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:07:17.246762 | orchestrator | 2026-04-08 02:07:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:07:17.250919 | orchestrator | 2026-04-08 02:07:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:07:17.251032 | orchestrator | 2026-04-08 02:07:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:07:20.304577 | orchestrator | 2026-04-08 02:07:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:07:20.306553 | orchestrator | 2026-04-08 02:07:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:07:20.306595 | orchestrator | 2026-04-08 02:07:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:07:23.357865 | orchestrator | 2026-04-08 
02:07:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:07:23.358782 | orchestrator | 2026-04-08 02:07:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:07:23.358879 | orchestrator | 2026-04-08 02:07:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:07:26.407321 | orchestrator | 2026-04-08 02:07:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:07:26.408562 | orchestrator | 2026-04-08 02:07:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:07:26.408647 | orchestrator | 2026-04-08 02:07:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:07:29.453113 | orchestrator | 2026-04-08 02:07:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:07:29.454630 | orchestrator | 2026-04-08 02:07:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:07:29.454837 | orchestrator | 2026-04-08 02:07:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:07:32.495491 | orchestrator | 2026-04-08 02:07:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:07:32.496280 | orchestrator | 2026-04-08 02:07:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:07:32.496658 | orchestrator | 2026-04-08 02:07:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:07:35.543942 | orchestrator | 2026-04-08 02:07:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:07:35.545751 | orchestrator | 2026-04-08 02:07:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:07:35.545820 | orchestrator | 2026-04-08 02:07:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:07:38.594631 | orchestrator | 2026-04-08 02:07:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 02:07:38.597222 | orchestrator | 2026-04-08 02:07:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:07:38.597510 | orchestrator | 2026-04-08 02:07:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:07:41.639906 | orchestrator | 2026-04-08 02:07:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:07:41.643649 | orchestrator | 2026-04-08 02:07:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:07:41.646433 | orchestrator | 2026-04-08 02:07:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:07:44.696068 | orchestrator | 2026-04-08 02:07:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:07:44.697032 | orchestrator | 2026-04-08 02:07:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:07:44.697316 | orchestrator | 2026-04-08 02:07:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:07:47.752593 | orchestrator | 2026-04-08 02:07:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:07:47.753661 | orchestrator | 2026-04-08 02:07:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:07:47.753709 | orchestrator | 2026-04-08 02:07:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:07:50.803761 | orchestrator | 2026-04-08 02:07:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:07:50.806317 | orchestrator | 2026-04-08 02:07:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:07:50.806373 | orchestrator | 2026-04-08 02:07:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:07:53.864923 | orchestrator | 2026-04-08 02:07:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:07:53.866282 | orchestrator | 2026-04-08 02:07:53 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:07:53.866341 | orchestrator | 2026-04-08 02:07:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:07:56.911286 | orchestrator | 2026-04-08 02:07:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:07:56.912686 | orchestrator | 2026-04-08 02:07:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:07:56.912759 | orchestrator | 2026-04-08 02:07:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:07:59.959025 | orchestrator | 2026-04-08 02:07:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:07:59.960720 | orchestrator | 2026-04-08 02:07:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:07:59.960772 | orchestrator | 2026-04-08 02:07:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:08:03.009838 | orchestrator | 2026-04-08 02:08:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:08:03.010505 | orchestrator | 2026-04-08 02:08:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:08:03.010565 | orchestrator | 2026-04-08 02:08:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:08:06.060056 | orchestrator | 2026-04-08 02:08:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:08:06.062547 | orchestrator | 2026-04-08 02:08:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:08:06.062852 | orchestrator | 2026-04-08 02:08:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:08:09.112154 | orchestrator | 2026-04-08 02:08:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:08:09.115475 | orchestrator | 2026-04-08 02:08:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:08:09.115530 | orchestrator | 2026-04-08 02:08:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:08:12.165671 | orchestrator | 2026-04-08 02:08:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:08:12.167933 | orchestrator | 2026-04-08 02:08:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:08:12.167982 | orchestrator | 2026-04-08 02:08:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:08:15.220175 | orchestrator | 2026-04-08 02:08:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:08:15.221177 | orchestrator | 2026-04-08 02:08:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:08:15.221332 | orchestrator | 2026-04-08 02:08:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:08:18.273384 | orchestrator | 2026-04-08 02:08:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:08:18.274520 | orchestrator | 2026-04-08 02:08:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:08:18.274561 | orchestrator | 2026-04-08 02:08:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:08:21.325378 | orchestrator | 2026-04-08 02:08:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:08:21.327089 | orchestrator | 2026-04-08 02:08:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:08:21.327162 | orchestrator | 2026-04-08 02:08:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:08:24.372232 | orchestrator | 2026-04-08 02:08:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:08:24.373493 | orchestrator | 2026-04-08 02:08:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:08:24.373542 | orchestrator | 2026-04-08 02:08:24 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 02:08:27.433201 | orchestrator | 2026-04-08 02:08:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:08:27.435229 | orchestrator | 2026-04-08 02:08:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:08:27.435406 | orchestrator | 2026-04-08 02:08:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:08:30.479878 | orchestrator | 2026-04-08 02:08:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:08:30.482216 | orchestrator | 2026-04-08 02:08:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:08:30.482279 | orchestrator | 2026-04-08 02:08:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:08:33.530531 | orchestrator | 2026-04-08 02:08:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:08:33.534221 | orchestrator | 2026-04-08 02:08:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:08:33.534399 | orchestrator | 2026-04-08 02:08:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:08:36.584902 | orchestrator | 2026-04-08 02:08:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:08:36.586308 | orchestrator | 2026-04-08 02:08:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:08:36.586355 | orchestrator | 2026-04-08 02:08:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:08:39.638651 | orchestrator | 2026-04-08 02:08:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:08:39.639523 | orchestrator | 2026-04-08 02:08:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:08:39.639647 | orchestrator | 2026-04-08 02:08:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:08:42.683905 | orchestrator | 2026-04-08 
02:08:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:08:42.685058 | orchestrator | 2026-04-08 02:08:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:08:42.685102 | orchestrator | 2026-04-08 02:08:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:08:45.736980 | orchestrator | 2026-04-08 02:08:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:08:45.738424 | orchestrator | 2026-04-08 02:08:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:08:45.738557 | orchestrator | 2026-04-08 02:08:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:08:48.790369 | orchestrator | 2026-04-08 02:08:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:08:48.793547 | orchestrator | 2026-04-08 02:08:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:08:48.793626 | orchestrator | 2026-04-08 02:08:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:08:51.841149 | orchestrator | 2026-04-08 02:08:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:08:51.843242 | orchestrator | 2026-04-08 02:08:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:08:51.843329 | orchestrator | 2026-04-08 02:08:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:08:54.887993 | orchestrator | 2026-04-08 02:08:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:08:54.891164 | orchestrator | 2026-04-08 02:08:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:08:54.891445 | orchestrator | 2026-04-08 02:08:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:08:57.935131 | orchestrator | 2026-04-08 02:08:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 02:08:57.937895 | orchestrator | 2026-04-08 02:08:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:08:57.937973 | orchestrator | 2026-04-08 02:08:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:09:00.986142 | orchestrator | 2026-04-08 02:09:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:00.988439 | orchestrator | 2026-04-08 02:09:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:09:00.988540 | orchestrator | 2026-04-08 02:09:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:09:04.040913 | orchestrator | 2026-04-08 02:09:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:04.043705 | orchestrator | 2026-04-08 02:09:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:09:04.043776 | orchestrator | 2026-04-08 02:09:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:09:07.090279 | orchestrator | 2026-04-08 02:09:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:07.091926 | orchestrator | 2026-04-08 02:09:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:09:07.091983 | orchestrator | 2026-04-08 02:09:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:09:10.139984 | orchestrator | 2026-04-08 02:09:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:10.140909 | orchestrator | 2026-04-08 02:09:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:09:10.141146 | orchestrator | 2026-04-08 02:09:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:09:13.182374 | orchestrator | 2026-04-08 02:09:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:13.184178 | orchestrator | 2026-04-08 02:09:13 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:09:13.184226 | orchestrator | 2026-04-08 02:09:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:09:16.226443 | orchestrator | 2026-04-08 02:09:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:16.227942 | orchestrator | 2026-04-08 02:09:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:09:16.228020 | orchestrator | 2026-04-08 02:09:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:09:19.277635 | orchestrator | 2026-04-08 02:09:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:19.279157 | orchestrator | 2026-04-08 02:09:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:09:19.279240 | orchestrator | 2026-04-08 02:09:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:09:22.324646 | orchestrator | 2026-04-08 02:09:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:22.326730 | orchestrator | 2026-04-08 02:09:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:09:22.326777 | orchestrator | 2026-04-08 02:09:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:09:25.383446 | orchestrator | 2026-04-08 02:09:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:25.385626 | orchestrator | 2026-04-08 02:09:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:09:25.385712 | orchestrator | 2026-04-08 02:09:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:09:28.434439 | orchestrator | 2026-04-08 02:09:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:28.437270 | orchestrator | 2026-04-08 02:09:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:09:28.437394 | orchestrator | 2026-04-08 02:09:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:09:31.486270 | orchestrator | 2026-04-08 02:09:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:31.489203 | orchestrator | 2026-04-08 02:09:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:09:31.489304 | orchestrator | 2026-04-08 02:09:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:09:34.536070 | orchestrator | 2026-04-08 02:09:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:34.537548 | orchestrator | 2026-04-08 02:09:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:09:34.537589 | orchestrator | 2026-04-08 02:09:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:09:37.583113 | orchestrator | 2026-04-08 02:09:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:37.585862 | orchestrator | 2026-04-08 02:09:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:09:37.585927 | orchestrator | 2026-04-08 02:09:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:09:40.635229 | orchestrator | 2026-04-08 02:09:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:40.637426 | orchestrator | 2026-04-08 02:09:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:09:40.637530 | orchestrator | 2026-04-08 02:09:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:09:43.680283 | orchestrator | 2026-04-08 02:09:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:43.681966 | orchestrator | 2026-04-08 02:09:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:09:43.682092 | orchestrator | 2026-04-08 02:09:43 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 02:09:46.723133 | orchestrator | 2026-04-08 02:09:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:46.725384 | orchestrator | 2026-04-08 02:09:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:09:46.725707 | orchestrator | 2026-04-08 02:09:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:09:49.777984 | orchestrator | 2026-04-08 02:09:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:49.779788 | orchestrator | 2026-04-08 02:09:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:09:49.779858 | orchestrator | 2026-04-08 02:09:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:09:52.827457 | orchestrator | 2026-04-08 02:09:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:52.829354 | orchestrator | 2026-04-08 02:09:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:09:52.829420 | orchestrator | 2026-04-08 02:09:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:09:55.873685 | orchestrator | 2026-04-08 02:09:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:55.874692 | orchestrator | 2026-04-08 02:09:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:09:55.874746 | orchestrator | 2026-04-08 02:09:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:09:58.920592 | orchestrator | 2026-04-08 02:09:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:09:58.922943 | orchestrator | 2026-04-08 02:09:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:09:58.923016 | orchestrator | 2026-04-08 02:09:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:10:01.970473 | orchestrator | 2026-04-08 
02:10:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:10:01.972109 | orchestrator | 2026-04-08 02:10:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:10:01.972171 | orchestrator | 2026-04-08 02:10:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:10:05.024986 | orchestrator | 2026-04-08 02:10:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:10:05.027754 | orchestrator | 2026-04-08 02:10:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:10:05.027849 | orchestrator | 2026-04-08 02:10:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:10:08.071444 | orchestrator | 2026-04-08 02:10:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:10:08.073468 | orchestrator | 2026-04-08 02:10:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:10:08.073567 | orchestrator | 2026-04-08 02:10:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:10:11.127227 | orchestrator | 2026-04-08 02:10:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:10:11.128093 | orchestrator | 2026-04-08 02:10:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:10:11.128144 | orchestrator | 2026-04-08 02:10:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:10:14.175801 | orchestrator | 2026-04-08 02:10:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:10:14.178398 | orchestrator | 2026-04-08 02:10:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:10:14.178505 | orchestrator | 2026-04-08 02:10:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:10:17.224118 | orchestrator | 2026-04-08 02:10:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 02:10:17.226560 | orchestrator | 2026-04-08 02:10:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:10:17.226630 | orchestrator | 2026-04-08 02:10:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:10:20.267953 | orchestrator | 2026-04-08 02:10:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:10:20.268154 | orchestrator | 2026-04-08 02:10:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:10:20.268841 | orchestrator | 2026-04-08 02:10:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:10:23.321772 | orchestrator | 2026-04-08 02:10:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:10:23.324318 | orchestrator | 2026-04-08 02:10:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:10:23.324411 | orchestrator | 2026-04-08 02:10:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:10:26.369417 | orchestrator | 2026-04-08 02:10:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:10:26.371701 | orchestrator | 2026-04-08 02:10:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:10:26.371749 | orchestrator | 2026-04-08 02:10:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:10:29.421458 | orchestrator | 2026-04-08 02:10:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:10:29.424039 | orchestrator | 2026-04-08 02:10:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:10:29.424080 | orchestrator | 2026-04-08 02:10:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:10:32.469435 | orchestrator | 2026-04-08 02:10:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:10:32.470884 | orchestrator | 2026-04-08 02:10:32 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:10:32.470946 | orchestrator | 2026-04-08 02:10:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:10:35.519593 | orchestrator | 2026-04-08 02:10:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:10:35.521023 | orchestrator | 2026-04-08 02:10:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:10:35.521088 | orchestrator | 2026-04-08 02:10:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:10:38.569938 | orchestrator | 2026-04-08 02:10:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:10:38.571171 | orchestrator | 2026-04-08 02:10:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:10:38.571247 | orchestrator | 2026-04-08 02:10:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:10:41.626089 | orchestrator | 2026-04-08 02:10:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:10:41.627625 | orchestrator | 2026-04-08 02:10:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:10:41.627684 | orchestrator | 2026-04-08 02:10:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:10:44.681202 | orchestrator | 2026-04-08 02:10:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:10:44.682600 | orchestrator | 2026-04-08 02:10:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:10:44.682637 | orchestrator | 2026-04-08 02:10:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:10:47.732278 | orchestrator | 2026-04-08 02:10:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:10:47.734700 | orchestrator | 2026-04-08 02:10:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:10:47.734760 | orchestrator | 2026-04-08 02:10:47 | INFO  | Wait 1 second(s) until the next check
2026-04-08 02:10:50.783980 | orchestrator | 2026-04-08 02:10:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 02:10:50.785512 | orchestrator | 2026-04-08 02:10:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 02:10:50.785595 | orchestrator | 2026-04-08 02:10:50 | INFO  | Wait 1 second(s) until the next check
[... identical polling output for both tasks repeated approximately every 3 seconds until 02:16:20 ...]
2026-04-08 02:16:20.183644 | orchestrator | 2026-04-08 02:16:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 02:16:20.185177 | orchestrator | 2026-04-08 02:16:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 02:16:20.185221 | orchestrator | 2026-04-08 02:16:20 | INFO  | Wait 1 second(s)
until the next check 2026-04-08 02:16:23.236884 | orchestrator | 2026-04-08 02:16:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:16:23.240365 | orchestrator | 2026-04-08 02:16:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:16:23.240485 | orchestrator | 2026-04-08 02:16:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:16:26.287046 | orchestrator | 2026-04-08 02:16:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:18:26.369495 | orchestrator | 2026-04-08 02:18:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:18:26.369616 | orchestrator | 2026-04-08 02:18:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:18:29.418337 | orchestrator | 2026-04-08 02:18:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:18:29.420802 | orchestrator | 2026-04-08 02:18:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:18:29.420854 | orchestrator | 2026-04-08 02:18:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:18:32.463940 | orchestrator | 2026-04-08 02:18:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:18:32.465908 | orchestrator | 2026-04-08 02:18:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:18:32.465972 | orchestrator | 2026-04-08 02:18:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:18:35.518111 | orchestrator | 2026-04-08 02:18:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:18:35.520042 | orchestrator | 2026-04-08 02:18:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:18:35.520132 | orchestrator | 2026-04-08 02:18:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:18:38.562766 | orchestrator | 2026-04-08 
02:18:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:18:38.564724 | orchestrator | 2026-04-08 02:18:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:18:38.564773 | orchestrator | 2026-04-08 02:18:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:18:41.609050 | orchestrator | 2026-04-08 02:18:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:18:41.610945 | orchestrator | 2026-04-08 02:18:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:18:41.611022 | orchestrator | 2026-04-08 02:18:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:18:44.656444 | orchestrator | 2026-04-08 02:18:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:18:44.658128 | orchestrator | 2026-04-08 02:18:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:18:44.658185 | orchestrator | 2026-04-08 02:18:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:18:47.702093 | orchestrator | 2026-04-08 02:18:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:18:47.704787 | orchestrator | 2026-04-08 02:18:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:18:47.704902 | orchestrator | 2026-04-08 02:18:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:18:50.761854 | orchestrator | 2026-04-08 02:18:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:18:50.761951 | orchestrator | 2026-04-08 02:18:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:18:50.762459 | orchestrator | 2026-04-08 02:18:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:18:53.808977 | orchestrator | 2026-04-08 02:18:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 02:18:53.809985 | orchestrator | 2026-04-08 02:18:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:18:53.810276 | orchestrator | 2026-04-08 02:18:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:18:56.855697 | orchestrator | 2026-04-08 02:18:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:18:56.856960 | orchestrator | 2026-04-08 02:18:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:18:56.856998 | orchestrator | 2026-04-08 02:18:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:18:59.899200 | orchestrator | 2026-04-08 02:18:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:18:59.900989 | orchestrator | 2026-04-08 02:18:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:18:59.901041 | orchestrator | 2026-04-08 02:18:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:19:02.944130 | orchestrator | 2026-04-08 02:19:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:19:02.946143 | orchestrator | 2026-04-08 02:19:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:19:02.946222 | orchestrator | 2026-04-08 02:19:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:19:05.989843 | orchestrator | 2026-04-08 02:19:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:19:05.991009 | orchestrator | 2026-04-08 02:19:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:19:05.991034 | orchestrator | 2026-04-08 02:19:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:19:09.039508 | orchestrator | 2026-04-08 02:19:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:19:09.041637 | orchestrator | 2026-04-08 02:19:09 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:19:09.041772 | orchestrator | 2026-04-08 02:19:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:19:12.084548 | orchestrator | 2026-04-08 02:19:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:19:12.087394 | orchestrator | 2026-04-08 02:19:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:19:12.087435 | orchestrator | 2026-04-08 02:19:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:19:15.131062 | orchestrator | 2026-04-08 02:19:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:19:15.132560 | orchestrator | 2026-04-08 02:19:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:19:15.132615 | orchestrator | 2026-04-08 02:19:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:19:18.180489 | orchestrator | 2026-04-08 02:19:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:19:18.182376 | orchestrator | 2026-04-08 02:19:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:19:18.182435 | orchestrator | 2026-04-08 02:19:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:19:21.223851 | orchestrator | 2026-04-08 02:19:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:19:21.224910 | orchestrator | 2026-04-08 02:19:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:19:21.225174 | orchestrator | 2026-04-08 02:19:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:19:24.273571 | orchestrator | 2026-04-08 02:19:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:19:24.275101 | orchestrator | 2026-04-08 02:19:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:19:24.275183 | orchestrator | 2026-04-08 02:19:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:19:27.319614 | orchestrator | 2026-04-08 02:19:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:19:27.320815 | orchestrator | 2026-04-08 02:19:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:19:27.320843 | orchestrator | 2026-04-08 02:19:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:19:30.367368 | orchestrator | 2026-04-08 02:19:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:19:30.369229 | orchestrator | 2026-04-08 02:19:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:19:30.369270 | orchestrator | 2026-04-08 02:19:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:19:33.414364 | orchestrator | 2026-04-08 02:19:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:19:33.416115 | orchestrator | 2026-04-08 02:19:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:19:33.416161 | orchestrator | 2026-04-08 02:19:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:19:36.455155 | orchestrator | 2026-04-08 02:19:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:19:36.456237 | orchestrator | 2026-04-08 02:19:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:19:36.457773 | orchestrator | 2026-04-08 02:19:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:19:39.497433 | orchestrator | 2026-04-08 02:19:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:19:39.498756 | orchestrator | 2026-04-08 02:19:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:19:39.498812 | orchestrator | 2026-04-08 02:19:39 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 02:19:42.544077 | orchestrator | 2026-04-08 02:19:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:19:42.545317 | orchestrator | 2026-04-08 02:19:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:19:42.545377 | orchestrator | 2026-04-08 02:19:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:19:45.591898 | orchestrator | 2026-04-08 02:19:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:19:45.593818 | orchestrator | 2026-04-08 02:19:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:19:45.593871 | orchestrator | 2026-04-08 02:19:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:19:48.642487 | orchestrator | 2026-04-08 02:19:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:19:48.644647 | orchestrator | 2026-04-08 02:19:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:19:48.644671 | orchestrator | 2026-04-08 02:19:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:19:51.682887 | orchestrator | 2026-04-08 02:19:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:19:51.684525 | orchestrator | 2026-04-08 02:19:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:19:51.684575 | orchestrator | 2026-04-08 02:19:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:19:54.725213 | orchestrator | 2026-04-08 02:19:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:19:54.728569 | orchestrator | 2026-04-08 02:19:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:19:54.728621 | orchestrator | 2026-04-08 02:19:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:19:57.774183 | orchestrator | 2026-04-08 
02:19:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:19:57.775997 | orchestrator | 2026-04-08 02:19:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:19:57.776057 | orchestrator | 2026-04-08 02:19:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:00.819606 | orchestrator | 2026-04-08 02:20:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:20:00.821140 | orchestrator | 2026-04-08 02:20:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:20:00.821361 | orchestrator | 2026-04-08 02:20:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:03.864465 | orchestrator | 2026-04-08 02:20:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:20:03.865544 | orchestrator | 2026-04-08 02:20:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:20:03.865590 | orchestrator | 2026-04-08 02:20:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:06.906285 | orchestrator | 2026-04-08 02:20:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:20:06.907594 | orchestrator | 2026-04-08 02:20:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:20:06.907627 | orchestrator | 2026-04-08 02:20:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:09.957399 | orchestrator | 2026-04-08 02:20:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:20:09.958561 | orchestrator | 2026-04-08 02:20:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:20:09.958623 | orchestrator | 2026-04-08 02:20:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:13.001051 | orchestrator | 2026-04-08 02:20:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 02:20:13.003935 | orchestrator | 2026-04-08 02:20:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:20:13.004004 | orchestrator | 2026-04-08 02:20:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:16.045884 | orchestrator | 2026-04-08 02:20:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:20:16.048013 | orchestrator | 2026-04-08 02:20:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:20:16.048060 | orchestrator | 2026-04-08 02:20:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:19.093359 | orchestrator | 2026-04-08 02:20:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:20:19.094495 | orchestrator | 2026-04-08 02:20:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:20:19.094612 | orchestrator | 2026-04-08 02:20:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:22.134197 | orchestrator | 2026-04-08 02:20:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:20:22.135172 | orchestrator | 2026-04-08 02:20:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:20:22.135227 | orchestrator | 2026-04-08 02:20:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:25.182534 | orchestrator | 2026-04-08 02:20:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:20:25.184904 | orchestrator | 2026-04-08 02:20:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:20:25.185033 | orchestrator | 2026-04-08 02:20:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:28.224323 | orchestrator | 2026-04-08 02:20:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:20:28.225953 | orchestrator | 2026-04-08 02:20:28 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:20:28.226114 | orchestrator | 2026-04-08 02:20:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:31.262829 | orchestrator | 2026-04-08 02:20:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:20:31.264218 | orchestrator | 2026-04-08 02:20:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:20:31.264259 | orchestrator | 2026-04-08 02:20:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:34.311726 | orchestrator | 2026-04-08 02:20:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:20:34.312638 | orchestrator | 2026-04-08 02:20:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:20:34.312722 | orchestrator | 2026-04-08 02:20:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:37.355778 | orchestrator | 2026-04-08 02:20:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:20:37.356899 | orchestrator | 2026-04-08 02:20:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:20:37.356951 | orchestrator | 2026-04-08 02:20:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:40.404847 | orchestrator | 2026-04-08 02:20:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:20:40.405354 | orchestrator | 2026-04-08 02:20:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:20:40.405379 | orchestrator | 2026-04-08 02:20:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:43.450813 | orchestrator | 2026-04-08 02:20:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:20:43.452368 | orchestrator | 2026-04-08 02:20:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:20:43.452456 | orchestrator | 2026-04-08 02:20:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:46.498236 | orchestrator | 2026-04-08 02:20:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:20:46.499912 | orchestrator | 2026-04-08 02:20:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:20:46.500042 | orchestrator | 2026-04-08 02:20:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:49.545178 | orchestrator | 2026-04-08 02:20:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:20:49.545374 | orchestrator | 2026-04-08 02:20:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:20:49.545405 | orchestrator | 2026-04-08 02:20:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:52.592686 | orchestrator | 2026-04-08 02:20:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:20:52.594619 | orchestrator | 2026-04-08 02:20:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:20:52.594726 | orchestrator | 2026-04-08 02:20:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:55.643112 | orchestrator | 2026-04-08 02:20:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:20:55.644159 | orchestrator | 2026-04-08 02:20:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:20:55.644408 | orchestrator | 2026-04-08 02:20:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:20:58.691892 | orchestrator | 2026-04-08 02:20:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:20:58.693081 | orchestrator | 2026-04-08 02:20:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:20:58.693133 | orchestrator | 2026-04-08 02:20:58 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 02:21:01.731316 | orchestrator | 2026-04-08 02:21:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:21:01.733617 | orchestrator | 2026-04-08 02:21:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:01.733729 | orchestrator | 2026-04-08 02:21:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:21:04.783745 | orchestrator | 2026-04-08 02:21:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:21:04.785474 | orchestrator | 2026-04-08 02:21:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:04.785508 | orchestrator | 2026-04-08 02:21:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:21:07.818995 | orchestrator | 2026-04-08 02:21:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:21:07.820649 | orchestrator | 2026-04-08 02:21:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:07.820662 | orchestrator | 2026-04-08 02:21:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:21:10.860459 | orchestrator | 2026-04-08 02:21:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:21:10.861602 | orchestrator | 2026-04-08 02:21:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:10.861783 | orchestrator | 2026-04-08 02:21:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:21:13.908205 | orchestrator | 2026-04-08 02:21:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:21:13.911214 | orchestrator | 2026-04-08 02:21:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:13.911232 | orchestrator | 2026-04-08 02:21:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:21:16.955378 | orchestrator | 2026-04-08 
02:21:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:21:16.957260 | orchestrator | 2026-04-08 02:21:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:16.957317 | orchestrator | 2026-04-08 02:21:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:21:20.006138 | orchestrator | 2026-04-08 02:21:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:21:20.008223 | orchestrator | 2026-04-08 02:21:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:20.008279 | orchestrator | 2026-04-08 02:21:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:21:23.049062 | orchestrator | 2026-04-08 02:21:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:21:23.050233 | orchestrator | 2026-04-08 02:21:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:23.050285 | orchestrator | 2026-04-08 02:21:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:21:26.096679 | orchestrator | 2026-04-08 02:21:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:21:26.099394 | orchestrator | 2026-04-08 02:21:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:26.099468 | orchestrator | 2026-04-08 02:21:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:21:29.140221 | orchestrator | 2026-04-08 02:21:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:21:29.141083 | orchestrator | 2026-04-08 02:21:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:29.141106 | orchestrator | 2026-04-08 02:21:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:21:32.181280 | orchestrator | 2026-04-08 02:21:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 02:21:32.182709 | orchestrator | 2026-04-08 02:21:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:32.182861 | orchestrator | 2026-04-08 02:21:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:21:35.221762 | orchestrator | 2026-04-08 02:21:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:21:35.222366 | orchestrator | 2026-04-08 02:21:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:35.222396 | orchestrator | 2026-04-08 02:21:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:21:38.268011 | orchestrator | 2026-04-08 02:21:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:21:38.269643 | orchestrator | 2026-04-08 02:21:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:38.269708 | orchestrator | 2026-04-08 02:21:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:21:41.314301 | orchestrator | 2026-04-08 02:21:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:21:41.315296 | orchestrator | 2026-04-08 02:21:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:41.315346 | orchestrator | 2026-04-08 02:21:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:21:44.367096 | orchestrator | 2026-04-08 02:21:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:21:44.368792 | orchestrator | 2026-04-08 02:21:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:44.368843 | orchestrator | 2026-04-08 02:21:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:21:47.415228 | orchestrator | 2026-04-08 02:21:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:21:47.416359 | orchestrator | 2026-04-08 02:21:47 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:47.416412 | orchestrator | 2026-04-08 02:21:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:21:50.450906 | orchestrator | 2026-04-08 02:21:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:21:50.452678 | orchestrator | 2026-04-08 02:21:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:50.453025 | orchestrator | 2026-04-08 02:21:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:21:53.496544 | orchestrator | 2026-04-08 02:21:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:21:53.498297 | orchestrator | 2026-04-08 02:21:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:53.498378 | orchestrator | 2026-04-08 02:21:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:21:56.549651 | orchestrator | 2026-04-08 02:21:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:21:56.551995 | orchestrator | 2026-04-08 02:21:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:56.552066 | orchestrator | 2026-04-08 02:21:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:21:59.592062 | orchestrator | 2026-04-08 02:21:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:21:59.593624 | orchestrator | 2026-04-08 02:21:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:21:59.593670 | orchestrator | 2026-04-08 02:21:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:22:02.638190 | orchestrator | 2026-04-08 02:22:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:22:02.639698 | orchestrator | 2026-04-08 02:22:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:22:02.639723 | orchestrator | 2026-04-08 02:22:02 | INFO  | Wait 1 second(s) until the next check
2026-04-08 02:22:05.688263 | orchestrator | 2026-04-08 02:22:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 02:22:05.690276 | orchestrator | 2026-04-08 02:22:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 02:22:05.690356 | orchestrator | 2026-04-08 02:22:05 | INFO  | Wait 1 second(s) until the next check
[... the same two polling entries ("Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED", "Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED") followed by "Wait 1 second(s) until the next check" repeat every ~3 seconds from 02:22:08 through 02:27:01 ...]
2026-04-08 02:27:04.467039 | orchestrator | 2026-04-08 02:27:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 02:27:04.468198 | orchestrator | 2026-04-08 02:27:04 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:27:04.468238 | orchestrator | 2026-04-08 02:27:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:27:07.512036 | orchestrator | 2026-04-08 02:27:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:27:07.513542 | orchestrator | 2026-04-08 02:27:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:27:07.513598 | orchestrator | 2026-04-08 02:27:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:27:10.565627 | orchestrator | 2026-04-08 02:27:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:27:10.568192 | orchestrator | 2026-04-08 02:27:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:27:10.568238 | orchestrator | 2026-04-08 02:27:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:27:13.613192 | orchestrator | 2026-04-08 02:27:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:27:13.613902 | orchestrator | 2026-04-08 02:27:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:27:13.613966 | orchestrator | 2026-04-08 02:27:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:27:16.661303 | orchestrator | 2026-04-08 02:27:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:27:16.662710 | orchestrator | 2026-04-08 02:27:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:27:16.662764 | orchestrator | 2026-04-08 02:27:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:27:19.714184 | orchestrator | 2026-04-08 02:27:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:27:19.715406 | orchestrator | 2026-04-08 02:27:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:27:19.715440 | orchestrator | 2026-04-08 02:27:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:27:22.771784 | orchestrator | 2026-04-08 02:27:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:27:22.771919 | orchestrator | 2026-04-08 02:27:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:27:22.771933 | orchestrator | 2026-04-08 02:27:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:27:25.816941 | orchestrator | 2026-04-08 02:27:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:27:25.818984 | orchestrator | 2026-04-08 02:27:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:27:25.819043 | orchestrator | 2026-04-08 02:27:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:27:28.871262 | orchestrator | 2026-04-08 02:27:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:27:28.873487 | orchestrator | 2026-04-08 02:27:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:27:28.873559 | orchestrator | 2026-04-08 02:27:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:27:31.921203 | orchestrator | 2026-04-08 02:27:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:27:31.924419 | orchestrator | 2026-04-08 02:27:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:27:31.924477 | orchestrator | 2026-04-08 02:27:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:27:34.977665 | orchestrator | 2026-04-08 02:27:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:27:34.982267 | orchestrator | 2026-04-08 02:27:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:27:34.982523 | orchestrator | 2026-04-08 02:27:34 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 02:27:38.036562 | orchestrator | 2026-04-08 02:27:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:27:38.039119 | orchestrator | 2026-04-08 02:27:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:27:38.039183 | orchestrator | 2026-04-08 02:27:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:27:41.091184 | orchestrator | 2026-04-08 02:27:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:27:41.093152 | orchestrator | 2026-04-08 02:27:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:27:41.093270 | orchestrator | 2026-04-08 02:27:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:27:44.142327 | orchestrator | 2026-04-08 02:27:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:27:44.144527 | orchestrator | 2026-04-08 02:27:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:27:44.144600 | orchestrator | 2026-04-08 02:27:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:27:47.197220 | orchestrator | 2026-04-08 02:27:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:27:47.199901 | orchestrator | 2026-04-08 02:27:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:27:47.199973 | orchestrator | 2026-04-08 02:27:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:27:50.254261 | orchestrator | 2026-04-08 02:27:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:27:50.257910 | orchestrator | 2026-04-08 02:27:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:27:50.259549 | orchestrator | 2026-04-08 02:27:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:27:53.306273 | orchestrator | 2026-04-08 
02:27:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:27:53.308263 | orchestrator | 2026-04-08 02:27:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:27:53.308337 | orchestrator | 2026-04-08 02:27:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:27:56.361739 | orchestrator | 2026-04-08 02:27:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:27:56.363354 | orchestrator | 2026-04-08 02:27:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:27:56.363853 | orchestrator | 2026-04-08 02:27:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:27:59.413407 | orchestrator | 2026-04-08 02:27:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:27:59.416201 | orchestrator | 2026-04-08 02:27:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:27:59.416280 | orchestrator | 2026-04-08 02:27:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:28:02.457256 | orchestrator | 2026-04-08 02:28:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:28:02.458630 | orchestrator | 2026-04-08 02:28:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:28:02.458679 | orchestrator | 2026-04-08 02:28:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:28:05.500021 | orchestrator | 2026-04-08 02:28:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:28:05.500976 | orchestrator | 2026-04-08 02:28:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:28:05.501068 | orchestrator | 2026-04-08 02:28:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:28:08.545371 | orchestrator | 2026-04-08 02:28:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 02:28:08.547453 | orchestrator | 2026-04-08 02:28:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:28:08.547570 | orchestrator | 2026-04-08 02:28:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:28:11.603540 | orchestrator | 2026-04-08 02:28:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:28:11.605166 | orchestrator | 2026-04-08 02:28:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:28:11.605245 | orchestrator | 2026-04-08 02:28:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:28:14.654176 | orchestrator | 2026-04-08 02:28:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:28:14.656379 | orchestrator | 2026-04-08 02:28:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:28:14.656427 | orchestrator | 2026-04-08 02:28:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:28:17.708283 | orchestrator | 2026-04-08 02:28:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:28:17.710235 | orchestrator | 2026-04-08 02:28:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:28:17.710287 | orchestrator | 2026-04-08 02:28:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:28:20.762925 | orchestrator | 2026-04-08 02:28:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:28:20.764404 | orchestrator | 2026-04-08 02:28:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:28:20.764450 | orchestrator | 2026-04-08 02:28:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:28:23.815052 | orchestrator | 2026-04-08 02:28:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:28:23.817698 | orchestrator | 2026-04-08 02:28:23 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:28:23.817770 | orchestrator | 2026-04-08 02:28:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:28:26.866183 | orchestrator | 2026-04-08 02:28:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:28:26.867579 | orchestrator | 2026-04-08 02:28:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:28:26.867729 | orchestrator | 2026-04-08 02:28:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:28:29.917106 | orchestrator | 2026-04-08 02:28:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:28:29.918413 | orchestrator | 2026-04-08 02:28:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:28:29.918455 | orchestrator | 2026-04-08 02:28:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:28:32.970393 | orchestrator | 2026-04-08 02:28:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:28:32.973041 | orchestrator | 2026-04-08 02:28:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:28:32.973114 | orchestrator | 2026-04-08 02:28:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:28:36.023863 | orchestrator | 2026-04-08 02:28:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:28:36.025131 | orchestrator | 2026-04-08 02:28:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:28:36.025202 | orchestrator | 2026-04-08 02:28:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:28:39.061880 | orchestrator | 2026-04-08 02:28:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:28:39.063453 | orchestrator | 2026-04-08 02:28:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:28:39.063568 | orchestrator | 2026-04-08 02:28:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:28:42.110313 | orchestrator | 2026-04-08 02:28:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:28:42.112103 | orchestrator | 2026-04-08 02:28:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:28:42.112154 | orchestrator | 2026-04-08 02:28:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:28:45.161859 | orchestrator | 2026-04-08 02:28:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:28:45.162699 | orchestrator | 2026-04-08 02:28:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:28:45.162739 | orchestrator | 2026-04-08 02:28:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:28:48.214875 | orchestrator | 2026-04-08 02:28:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:28:48.216446 | orchestrator | 2026-04-08 02:28:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:28:48.216498 | orchestrator | 2026-04-08 02:28:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:28:51.266278 | orchestrator | 2026-04-08 02:28:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:28:51.267601 | orchestrator | 2026-04-08 02:28:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:28:51.267730 | orchestrator | 2026-04-08 02:28:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:28:54.310482 | orchestrator | 2026-04-08 02:28:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:28:54.311597 | orchestrator | 2026-04-08 02:28:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:28:54.311659 | orchestrator | 2026-04-08 02:28:54 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 02:28:57.361560 | orchestrator | 2026-04-08 02:28:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:28:57.362594 | orchestrator | 2026-04-08 02:28:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:28:57.362614 | orchestrator | 2026-04-08 02:28:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:00.412184 | orchestrator | 2026-04-08 02:29:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:29:00.414906 | orchestrator | 2026-04-08 02:29:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:29:00.414971 | orchestrator | 2026-04-08 02:29:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:03.464154 | orchestrator | 2026-04-08 02:29:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:29:03.466313 | orchestrator | 2026-04-08 02:29:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:29:03.466365 | orchestrator | 2026-04-08 02:29:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:06.515653 | orchestrator | 2026-04-08 02:29:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:29:06.517296 | orchestrator | 2026-04-08 02:29:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:29:06.517352 | orchestrator | 2026-04-08 02:29:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:09.568390 | orchestrator | 2026-04-08 02:29:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:29:09.569813 | orchestrator | 2026-04-08 02:29:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:29:09.569846 | orchestrator | 2026-04-08 02:29:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:12.618367 | orchestrator | 2026-04-08 
02:29:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:29:12.620173 | orchestrator | 2026-04-08 02:29:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:29:12.620219 | orchestrator | 2026-04-08 02:29:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:15.667602 | orchestrator | 2026-04-08 02:29:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:29:15.669879 | orchestrator | 2026-04-08 02:29:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:29:15.669937 | orchestrator | 2026-04-08 02:29:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:18.722753 | orchestrator | 2026-04-08 02:29:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:29:18.725053 | orchestrator | 2026-04-08 02:29:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:29:18.725171 | orchestrator | 2026-04-08 02:29:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:21.772156 | orchestrator | 2026-04-08 02:29:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:29:21.773681 | orchestrator | 2026-04-08 02:29:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:29:21.773718 | orchestrator | 2026-04-08 02:29:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:24.818892 | orchestrator | 2026-04-08 02:29:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:29:24.821049 | orchestrator | 2026-04-08 02:29:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:29:24.821155 | orchestrator | 2026-04-08 02:29:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:27.868194 | orchestrator | 2026-04-08 02:29:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 02:29:27.870430 | orchestrator | 2026-04-08 02:29:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:29:27.870555 | orchestrator | 2026-04-08 02:29:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:30.921258 | orchestrator | 2026-04-08 02:29:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:29:30.924261 | orchestrator | 2026-04-08 02:29:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:29:30.924347 | orchestrator | 2026-04-08 02:29:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:33.974603 | orchestrator | 2026-04-08 02:29:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:29:33.978902 | orchestrator | 2026-04-08 02:29:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:29:33.978961 | orchestrator | 2026-04-08 02:29:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:37.034134 | orchestrator | 2026-04-08 02:29:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:29:37.035705 | orchestrator | 2026-04-08 02:29:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:29:37.035751 | orchestrator | 2026-04-08 02:29:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:40.087170 | orchestrator | 2026-04-08 02:29:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:29:40.092004 | orchestrator | 2026-04-08 02:29:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:29:40.092448 | orchestrator | 2026-04-08 02:29:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:43.144981 | orchestrator | 2026-04-08 02:29:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:29:43.146375 | orchestrator | 2026-04-08 02:29:43 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:29:43.146421 | orchestrator | 2026-04-08 02:29:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:46.201559 | orchestrator | 2026-04-08 02:29:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:29:46.205286 | orchestrator | 2026-04-08 02:29:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:29:46.205372 | orchestrator | 2026-04-08 02:29:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:49.258906 | orchestrator | 2026-04-08 02:29:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:29:49.260183 | orchestrator | 2026-04-08 02:29:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:29:49.260475 | orchestrator | 2026-04-08 02:29:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:52.316144 | orchestrator | 2026-04-08 02:29:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:29:52.319191 | orchestrator | 2026-04-08 02:29:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:29:52.319264 | orchestrator | 2026-04-08 02:29:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:55.374667 | orchestrator | 2026-04-08 02:29:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:29:55.375725 | orchestrator | 2026-04-08 02:29:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:29:55.375828 | orchestrator | 2026-04-08 02:29:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:29:58.424404 | orchestrator | 2026-04-08 02:29:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:29:58.426635 | orchestrator | 2026-04-08 02:29:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:29:58.426663 | orchestrator | 2026-04-08 02:29:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:30:01.476824 | orchestrator | 2026-04-08 02:30:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:30:01.478510 | orchestrator | 2026-04-08 02:30:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:30:01.478563 | orchestrator | 2026-04-08 02:30:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:30:04.530660 | orchestrator | 2026-04-08 02:30:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:30:04.532345 | orchestrator | 2026-04-08 02:30:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:30:04.532512 | orchestrator | 2026-04-08 02:30:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:30:07.583384 | orchestrator | 2026-04-08 02:30:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:30:07.585382 | orchestrator | 2026-04-08 02:30:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:30:07.585644 | orchestrator | 2026-04-08 02:30:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:30:10.634548 | orchestrator | 2026-04-08 02:30:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:30:10.635120 | orchestrator | 2026-04-08 02:30:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:30:10.635150 | orchestrator | 2026-04-08 02:30:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:30:13.688143 | orchestrator | 2026-04-08 02:30:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:30:13.690333 | orchestrator | 2026-04-08 02:30:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:30:13.690378 | orchestrator | 2026-04-08 02:30:13 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 02:30:16.742319 | orchestrator | 2026-04-08 02:30:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:30:16.743671 | orchestrator | 2026-04-08 02:30:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:30:16.743764 | orchestrator | 2026-04-08 02:30:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:30:19.794918 | orchestrator | 2026-04-08 02:30:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:30:19.796534 | orchestrator | 2026-04-08 02:30:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:30:19.796589 | orchestrator | 2026-04-08 02:30:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:30:22.847536 | orchestrator | 2026-04-08 02:30:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:30:22.849190 | orchestrator | 2026-04-08 02:30:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:30:22.849232 | orchestrator | 2026-04-08 02:30:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:30:25.890952 | orchestrator | 2026-04-08 02:30:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:30:25.891046 | orchestrator | 2026-04-08 02:30:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:30:25.891056 | orchestrator | 2026-04-08 02:30:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:30:28.934777 | orchestrator | 2026-04-08 02:30:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:30:28.935765 | orchestrator | 2026-04-08 02:30:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:30:28.935784 | orchestrator | 2026-04-08 02:30:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:30:31.986236 | orchestrator | 2026-04-08 
02:30:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:30:31.987615 | orchestrator | 2026-04-08 02:30:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:30:31.987660 | orchestrator | 2026-04-08 02:30:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:30:35.043407 | orchestrator | 2026-04-08 02:30:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:30:35.043550 | orchestrator | 2026-04-08 02:30:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:30:35.043572 | orchestrator | 2026-04-08 02:30:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:30:38.096516 | orchestrator | 2026-04-08 02:30:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:30:38.097684 | orchestrator | 2026-04-08 02:30:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:30:38.097780 | orchestrator | 2026-04-08 02:30:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:30:41.141367 | orchestrator | 2026-04-08 02:30:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:30:41.143434 | orchestrator | 2026-04-08 02:30:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:30:41.143502 | orchestrator | 2026-04-08 02:30:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:30:44.192612 | orchestrator | 2026-04-08 02:30:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:30:44.193874 | orchestrator | 2026-04-08 02:30:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:30:44.193963 | orchestrator | 2026-04-08 02:30:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:30:47.239810 | orchestrator | 2026-04-08 02:30:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 02:30:47.240898 | orchestrator | 2026-04-08 02:30:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:30:47.241260 | orchestrator | 2026-04-08 02:30:47 | INFO  | Wait 1 second(s) until the next check
[... repeated polling entries elided: tasks e8d40863-7499-4ee6-9471-9efa1265639c and 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 remained in state STARTED, checked every ~3 seconds from 02:30:50 through 02:36:16, each check followed by "Wait 1 second(s) until the next check" ...]
2026-04-08 02:36:19.874543 | orchestrator | 2026-04-08 02:36:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:36:19.876585 | orchestrator | 2026-04-08 02:36:19 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:36:19.876631 | orchestrator | 2026-04-08 02:36:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:36:22.920078 | orchestrator | 2026-04-08 02:36:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:36:22.921323 | orchestrator | 2026-04-08 02:36:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:36:22.921887 | orchestrator | 2026-04-08 02:36:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:36:25.972790 | orchestrator | 2026-04-08 02:36:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:36:25.974225 | orchestrator | 2026-04-08 02:36:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:36:25.974324 | orchestrator | 2026-04-08 02:36:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:36:29.024722 | orchestrator | 2026-04-08 02:36:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:36:29.027014 | orchestrator | 2026-04-08 02:36:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:36:29.027065 | orchestrator | 2026-04-08 02:36:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:36:32.072849 | orchestrator | 2026-04-08 02:36:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:36:32.073909 | orchestrator | 2026-04-08 02:36:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:36:32.073959 | orchestrator | 2026-04-08 02:36:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:36:35.122768 | orchestrator | 2026-04-08 02:36:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:36:35.125059 | orchestrator | 2026-04-08 02:36:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:36:35.125121 | orchestrator | 2026-04-08 02:36:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:36:38.172175 | orchestrator | 2026-04-08 02:36:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:36:38.173148 | orchestrator | 2026-04-08 02:36:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:36:38.173201 | orchestrator | 2026-04-08 02:36:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:36:41.229595 | orchestrator | 2026-04-08 02:36:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:36:41.232038 | orchestrator | 2026-04-08 02:36:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:36:41.232119 | orchestrator | 2026-04-08 02:36:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:36:44.278365 | orchestrator | 2026-04-08 02:36:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:36:44.280711 | orchestrator | 2026-04-08 02:36:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:36:44.280777 | orchestrator | 2026-04-08 02:36:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:36:47.332638 | orchestrator | 2026-04-08 02:36:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:36:47.334117 | orchestrator | 2026-04-08 02:36:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:36:47.334304 | orchestrator | 2026-04-08 02:36:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:36:50.380111 | orchestrator | 2026-04-08 02:36:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:36:50.381558 | orchestrator | 2026-04-08 02:36:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:36:50.381606 | orchestrator | 2026-04-08 02:36:50 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 02:36:53.424341 | orchestrator | 2026-04-08 02:36:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:36:53.428021 | orchestrator | 2026-04-08 02:36:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:36:53.428603 | orchestrator | 2026-04-08 02:36:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:36:56.472689 | orchestrator | 2026-04-08 02:36:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:36:56.474314 | orchestrator | 2026-04-08 02:36:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:36:56.474353 | orchestrator | 2026-04-08 02:36:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:36:59.523957 | orchestrator | 2026-04-08 02:36:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:36:59.526100 | orchestrator | 2026-04-08 02:36:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:36:59.526142 | orchestrator | 2026-04-08 02:36:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:37:02.572605 | orchestrator | 2026-04-08 02:37:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:37:02.576378 | orchestrator | 2026-04-08 02:37:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:37:02.576540 | orchestrator | 2026-04-08 02:37:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:37:05.637164 | orchestrator | 2026-04-08 02:37:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:37:05.638955 | orchestrator | 2026-04-08 02:37:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:37:05.639015 | orchestrator | 2026-04-08 02:37:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:37:08.692262 | orchestrator | 2026-04-08 
02:37:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:37:08.693692 | orchestrator | 2026-04-08 02:37:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:37:08.693857 | orchestrator | 2026-04-08 02:37:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:37:11.746538 | orchestrator | 2026-04-08 02:37:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:37:11.748377 | orchestrator | 2026-04-08 02:37:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:37:11.748423 | orchestrator | 2026-04-08 02:37:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:37:14.804057 | orchestrator | 2026-04-08 02:37:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:37:14.805630 | orchestrator | 2026-04-08 02:37:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:37:14.805784 | orchestrator | 2026-04-08 02:37:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:37:17.857504 | orchestrator | 2026-04-08 02:37:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:37:17.858652 | orchestrator | 2026-04-08 02:37:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:37:17.858696 | orchestrator | 2026-04-08 02:37:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:37:20.912572 | orchestrator | 2026-04-08 02:37:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:37:20.914905 | orchestrator | 2026-04-08 02:37:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:37:20.915000 | orchestrator | 2026-04-08 02:37:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:37:23.963735 | orchestrator | 2026-04-08 02:37:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 02:37:23.965976 | orchestrator | 2026-04-08 02:37:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:37:23.966120 | orchestrator | 2026-04-08 02:37:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:37:27.016039 | orchestrator | 2026-04-08 02:37:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:37:27.017841 | orchestrator | 2026-04-08 02:37:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:37:27.017912 | orchestrator | 2026-04-08 02:37:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:37:30.065631 | orchestrator | 2026-04-08 02:37:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:37:30.066930 | orchestrator | 2026-04-08 02:37:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:37:30.067033 | orchestrator | 2026-04-08 02:37:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:37:33.117573 | orchestrator | 2026-04-08 02:37:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:37:33.120520 | orchestrator | 2026-04-08 02:37:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:37:33.120559 | orchestrator | 2026-04-08 02:37:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:37:36.162694 | orchestrator | 2026-04-08 02:37:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:37:36.163350 | orchestrator | 2026-04-08 02:37:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:37:36.163369 | orchestrator | 2026-04-08 02:37:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:37:39.216667 | orchestrator | 2026-04-08 02:37:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:37:39.218303 | orchestrator | 2026-04-08 02:37:39 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:37:39.218377 | orchestrator | 2026-04-08 02:37:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:37:42.264373 | orchestrator | 2026-04-08 02:37:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:37:42.266988 | orchestrator | 2026-04-08 02:37:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:37:42.267140 | orchestrator | 2026-04-08 02:37:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:37:45.319177 | orchestrator | 2026-04-08 02:37:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:37:45.320934 | orchestrator | 2026-04-08 02:37:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:37:45.320979 | orchestrator | 2026-04-08 02:37:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:37:48.367902 | orchestrator | 2026-04-08 02:37:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:37:48.370652 | orchestrator | 2026-04-08 02:37:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:37:48.370732 | orchestrator | 2026-04-08 02:37:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:37:51.415192 | orchestrator | 2026-04-08 02:37:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:37:51.416621 | orchestrator | 2026-04-08 02:37:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:37:51.416662 | orchestrator | 2026-04-08 02:37:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:37:54.465146 | orchestrator | 2026-04-08 02:37:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:37:54.467526 | orchestrator | 2026-04-08 02:37:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:37:54.467571 | orchestrator | 2026-04-08 02:37:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:37:57.511290 | orchestrator | 2026-04-08 02:37:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:37:57.513632 | orchestrator | 2026-04-08 02:37:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:37:57.513790 | orchestrator | 2026-04-08 02:37:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:38:00.561579 | orchestrator | 2026-04-08 02:38:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:38:00.562403 | orchestrator | 2026-04-08 02:38:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:00.562547 | orchestrator | 2026-04-08 02:38:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:38:03.616973 | orchestrator | 2026-04-08 02:38:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:38:03.618401 | orchestrator | 2026-04-08 02:38:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:03.618762 | orchestrator | 2026-04-08 02:38:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:38:06.665106 | orchestrator | 2026-04-08 02:38:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:38:06.667982 | orchestrator | 2026-04-08 02:38:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:06.668153 | orchestrator | 2026-04-08 02:38:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:38:09.710167 | orchestrator | 2026-04-08 02:38:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:38:09.711490 | orchestrator | 2026-04-08 02:38:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:09.711632 | orchestrator | 2026-04-08 02:38:09 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 02:38:12.753809 | orchestrator | 2026-04-08 02:38:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:38:12.755141 | orchestrator | 2026-04-08 02:38:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:12.755203 | orchestrator | 2026-04-08 02:38:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:38:15.793196 | orchestrator | 2026-04-08 02:38:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:38:15.794540 | orchestrator | 2026-04-08 02:38:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:15.794597 | orchestrator | 2026-04-08 02:38:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:38:18.845661 | orchestrator | 2026-04-08 02:38:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:38:18.847867 | orchestrator | 2026-04-08 02:38:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:18.847954 | orchestrator | 2026-04-08 02:38:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:38:21.892943 | orchestrator | 2026-04-08 02:38:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:38:21.894898 | orchestrator | 2026-04-08 02:38:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:21.895000 | orchestrator | 2026-04-08 02:38:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:38:24.939743 | orchestrator | 2026-04-08 02:38:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:38:24.942277 | orchestrator | 2026-04-08 02:38:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:24.942363 | orchestrator | 2026-04-08 02:38:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:38:27.982719 | orchestrator | 2026-04-08 
02:38:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:38:27.984276 | orchestrator | 2026-04-08 02:38:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:27.984397 | orchestrator | 2026-04-08 02:38:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:38:31.032749 | orchestrator | 2026-04-08 02:38:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:38:31.034825 | orchestrator | 2026-04-08 02:38:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:31.034864 | orchestrator | 2026-04-08 02:38:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:38:34.091689 | orchestrator | 2026-04-08 02:38:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:38:34.094430 | orchestrator | 2026-04-08 02:38:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:34.094507 | orchestrator | 2026-04-08 02:38:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:38:37.148974 | orchestrator | 2026-04-08 02:38:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:38:37.149906 | orchestrator | 2026-04-08 02:38:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:37.149946 | orchestrator | 2026-04-08 02:38:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:38:40.205889 | orchestrator | 2026-04-08 02:38:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:38:40.207589 | orchestrator | 2026-04-08 02:38:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:40.207685 | orchestrator | 2026-04-08 02:38:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:38:43.257047 | orchestrator | 2026-04-08 02:38:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 02:38:43.260509 | orchestrator | 2026-04-08 02:38:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:43.260587 | orchestrator | 2026-04-08 02:38:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:38:46.308728 | orchestrator | 2026-04-08 02:38:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:38:46.310961 | orchestrator | 2026-04-08 02:38:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:46.311073 | orchestrator | 2026-04-08 02:38:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:38:49.354793 | orchestrator | 2026-04-08 02:38:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:38:49.356251 | orchestrator | 2026-04-08 02:38:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:49.356330 | orchestrator | 2026-04-08 02:38:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:38:52.402466 | orchestrator | 2026-04-08 02:38:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:38:52.404817 | orchestrator | 2026-04-08 02:38:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:52.404885 | orchestrator | 2026-04-08 02:38:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:38:55.447169 | orchestrator | 2026-04-08 02:38:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:38:55.449456 | orchestrator | 2026-04-08 02:38:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:55.449498 | orchestrator | 2026-04-08 02:38:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:38:58.497019 | orchestrator | 2026-04-08 02:38:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:38:58.498002 | orchestrator | 2026-04-08 02:38:58 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:38:58.498204 | orchestrator | 2026-04-08 02:38:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:39:01.550238 | orchestrator | 2026-04-08 02:39:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:01.551648 | orchestrator | 2026-04-08 02:39:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:39:01.551705 | orchestrator | 2026-04-08 02:39:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:39:04.602983 | orchestrator | 2026-04-08 02:39:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:04.604305 | orchestrator | 2026-04-08 02:39:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:39:04.604344 | orchestrator | 2026-04-08 02:39:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:39:07.655776 | orchestrator | 2026-04-08 02:39:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:07.657816 | orchestrator | 2026-04-08 02:39:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:39:07.657850 | orchestrator | 2026-04-08 02:39:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:39:10.710277 | orchestrator | 2026-04-08 02:39:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:10.711649 | orchestrator | 2026-04-08 02:39:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:39:10.711717 | orchestrator | 2026-04-08 02:39:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:39:13.763644 | orchestrator | 2026-04-08 02:39:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:13.764900 | orchestrator | 2026-04-08 02:39:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:39:13.764928 | orchestrator | 2026-04-08 02:39:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:39:16.816680 | orchestrator | 2026-04-08 02:39:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:16.819203 | orchestrator | 2026-04-08 02:39:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:39:16.819260 | orchestrator | 2026-04-08 02:39:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:39:19.871654 | orchestrator | 2026-04-08 02:39:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:19.875285 | orchestrator | 2026-04-08 02:39:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:39:19.875388 | orchestrator | 2026-04-08 02:39:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:39:22.925386 | orchestrator | 2026-04-08 02:39:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:22.926979 | orchestrator | 2026-04-08 02:39:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:39:22.927013 | orchestrator | 2026-04-08 02:39:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:39:25.980069 | orchestrator | 2026-04-08 02:39:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:25.982862 | orchestrator | 2026-04-08 02:39:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:39:25.982938 | orchestrator | 2026-04-08 02:39:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:39:29.033199 | orchestrator | 2026-04-08 02:39:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:29.034653 | orchestrator | 2026-04-08 02:39:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:39:29.034853 | orchestrator | 2026-04-08 02:39:29 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 02:39:32.087883 | orchestrator | 2026-04-08 02:39:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:32.090303 | orchestrator | 2026-04-08 02:39:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:39:32.090364 | orchestrator | 2026-04-08 02:39:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:39:35.137445 | orchestrator | 2026-04-08 02:39:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:35.139601 | orchestrator | 2026-04-08 02:39:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:39:35.139740 | orchestrator | 2026-04-08 02:39:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:39:38.189827 | orchestrator | 2026-04-08 02:39:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:38.191630 | orchestrator | 2026-04-08 02:39:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:39:38.191718 | orchestrator | 2026-04-08 02:39:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:39:41.248259 | orchestrator | 2026-04-08 02:39:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:41.251759 | orchestrator | 2026-04-08 02:39:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:39:41.251835 | orchestrator | 2026-04-08 02:39:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:39:44.302607 | orchestrator | 2026-04-08 02:39:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:44.303923 | orchestrator | 2026-04-08 02:39:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:39:44.304066 | orchestrator | 2026-04-08 02:39:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:39:47.354218 | orchestrator | 2026-04-08 
02:39:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:47.357849 | orchestrator | 2026-04-08 02:39:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:39:47.357929 | orchestrator | 2026-04-08 02:39:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:39:50.409031 | orchestrator | 2026-04-08 02:39:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:50.413458 | orchestrator | 2026-04-08 02:39:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:39:50.413566 | orchestrator | 2026-04-08 02:39:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:39:53.468487 | orchestrator | 2026-04-08 02:39:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:53.470864 | orchestrator | 2026-04-08 02:39:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:39:53.470937 | orchestrator | 2026-04-08 02:39:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:39:56.519576 | orchestrator | 2026-04-08 02:39:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:56.521349 | orchestrator | 2026-04-08 02:39:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:39:56.521484 | orchestrator | 2026-04-08 02:39:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:39:59.575172 | orchestrator | 2026-04-08 02:39:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:39:59.576513 | orchestrator | 2026-04-08 02:39:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:39:59.576553 | orchestrator | 2026-04-08 02:39:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:40:02.625886 | orchestrator | 2026-04-08 02:40:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED
2026-04-08 02:40:02.628326 | orchestrator | 2026-04-08 02:40:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 02:40:02.628382 | orchestrator | 2026-04-08 02:40:02 | INFO  | Wait 1 second(s) until the next check
[identical polling output repeated every ~3 seconds from 02:40:05 through 02:45:16: tasks e8d40863-7499-4ee6-9471-9efa1265639c and 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 remained in state STARTED throughout]
2026-04-08 02:45:19.806575 | orchestrator | 2026-04-08 02:45:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state
STARTED 2026-04-08 02:45:19.809191 | orchestrator | 2026-04-08 02:45:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:45:19.809247 | orchestrator | 2026-04-08 02:45:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:45:22.858982 | orchestrator | 2026-04-08 02:45:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:45:22.859636 | orchestrator | 2026-04-08 02:45:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:45:22.859800 | orchestrator | 2026-04-08 02:45:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:45:25.898801 | orchestrator | 2026-04-08 02:45:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:45:25.900786 | orchestrator | 2026-04-08 02:45:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:45:25.900845 | orchestrator | 2026-04-08 02:45:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:45:28.946262 | orchestrator | 2026-04-08 02:45:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:45:28.948440 | orchestrator | 2026-04-08 02:45:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:45:28.948499 | orchestrator | 2026-04-08 02:45:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:45:31.992629 | orchestrator | 2026-04-08 02:45:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:45:31.993552 | orchestrator | 2026-04-08 02:45:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:45:31.993613 | orchestrator | 2026-04-08 02:45:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:45:35.042913 | orchestrator | 2026-04-08 02:45:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:45:35.044486 | orchestrator | 2026-04-08 02:45:35 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:45:35.044523 | orchestrator | 2026-04-08 02:45:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:45:38.096467 | orchestrator | 2026-04-08 02:45:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:45:38.099856 | orchestrator | 2026-04-08 02:45:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:45:38.100012 | orchestrator | 2026-04-08 02:45:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:45:41.149473 | orchestrator | 2026-04-08 02:45:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:45:41.151029 | orchestrator | 2026-04-08 02:45:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:45:41.151065 | orchestrator | 2026-04-08 02:45:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:45:44.202561 | orchestrator | 2026-04-08 02:45:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:45:44.204167 | orchestrator | 2026-04-08 02:45:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:45:44.204217 | orchestrator | 2026-04-08 02:45:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:45:47.262472 | orchestrator | 2026-04-08 02:45:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:45:47.264464 | orchestrator | 2026-04-08 02:45:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:45:47.266414 | orchestrator | 2026-04-08 02:45:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:45:50.308957 | orchestrator | 2026-04-08 02:45:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:45:50.310485 | orchestrator | 2026-04-08 02:45:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:45:50.310565 | orchestrator | 2026-04-08 02:45:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:45:53.365504 | orchestrator | 2026-04-08 02:45:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:45:53.367068 | orchestrator | 2026-04-08 02:45:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:45:53.367112 | orchestrator | 2026-04-08 02:45:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:45:56.414189 | orchestrator | 2026-04-08 02:45:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:45:56.419744 | orchestrator | 2026-04-08 02:45:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:45:56.419842 | orchestrator | 2026-04-08 02:45:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:45:59.464322 | orchestrator | 2026-04-08 02:45:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:45:59.465231 | orchestrator | 2026-04-08 02:45:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:45:59.465260 | orchestrator | 2026-04-08 02:45:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:46:02.506790 | orchestrator | 2026-04-08 02:46:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:46:02.508014 | orchestrator | 2026-04-08 02:46:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:46:02.508131 | orchestrator | 2026-04-08 02:46:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:46:05.555751 | orchestrator | 2026-04-08 02:46:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:46:05.558061 | orchestrator | 2026-04-08 02:46:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:46:05.558307 | orchestrator | 2026-04-08 02:46:05 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 02:46:08.607706 | orchestrator | 2026-04-08 02:46:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:46:08.609425 | orchestrator | 2026-04-08 02:46:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:46:08.609466 | orchestrator | 2026-04-08 02:46:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:46:11.659640 | orchestrator | 2026-04-08 02:46:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:46:11.659836 | orchestrator | 2026-04-08 02:46:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:46:11.659850 | orchestrator | 2026-04-08 02:46:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:46:14.708658 | orchestrator | 2026-04-08 02:46:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:46:14.709956 | orchestrator | 2026-04-08 02:46:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:46:14.710114 | orchestrator | 2026-04-08 02:46:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:46:17.766907 | orchestrator | 2026-04-08 02:46:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:46:17.770007 | orchestrator | 2026-04-08 02:46:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:46:17.770238 | orchestrator | 2026-04-08 02:46:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:46:20.818902 | orchestrator | 2026-04-08 02:46:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:46:20.821439 | orchestrator | 2026-04-08 02:46:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:46:20.821487 | orchestrator | 2026-04-08 02:46:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:46:23.878493 | orchestrator | 2026-04-08 
02:46:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:46:23.880082 | orchestrator | 2026-04-08 02:46:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:46:23.880147 | orchestrator | 2026-04-08 02:46:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:46:26.934820 | orchestrator | 2026-04-08 02:46:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:46:26.936174 | orchestrator | 2026-04-08 02:46:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:46:26.936320 | orchestrator | 2026-04-08 02:46:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:46:29.991263 | orchestrator | 2026-04-08 02:46:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:46:29.993963 | orchestrator | 2026-04-08 02:46:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:46:29.995113 | orchestrator | 2026-04-08 02:46:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:46:33.046179 | orchestrator | 2026-04-08 02:46:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:46:33.048428 | orchestrator | 2026-04-08 02:46:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:46:33.048832 | orchestrator | 2026-04-08 02:46:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:46:36.102671 | orchestrator | 2026-04-08 02:46:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:46:36.104895 | orchestrator | 2026-04-08 02:46:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:46:36.104944 | orchestrator | 2026-04-08 02:46:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:46:39.148817 | orchestrator | 2026-04-08 02:46:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 02:46:39.150997 | orchestrator | 2026-04-08 02:46:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:46:39.151062 | orchestrator | 2026-04-08 02:46:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:46:42.198706 | orchestrator | 2026-04-08 02:46:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:46:42.199874 | orchestrator | 2026-04-08 02:46:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:46:42.199923 | orchestrator | 2026-04-08 02:46:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:46:45.254737 | orchestrator | 2026-04-08 02:46:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:46:45.256542 | orchestrator | 2026-04-08 02:46:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:46:45.256635 | orchestrator | 2026-04-08 02:46:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:46:48.304517 | orchestrator | 2026-04-08 02:46:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:46:48.306733 | orchestrator | 2026-04-08 02:46:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:46:48.306813 | orchestrator | 2026-04-08 02:46:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:46:51.356736 | orchestrator | 2026-04-08 02:46:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:46:51.358638 | orchestrator | 2026-04-08 02:46:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:46:51.358736 | orchestrator | 2026-04-08 02:46:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:46:54.400308 | orchestrator | 2026-04-08 02:46:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:46:54.400557 | orchestrator | 2026-04-08 02:46:54 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:46:54.400588 | orchestrator | 2026-04-08 02:46:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:46:57.456255 | orchestrator | 2026-04-08 02:46:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:46:57.457995 | orchestrator | 2026-04-08 02:46:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:46:57.458115 | orchestrator | 2026-04-08 02:46:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:47:00.506598 | orchestrator | 2026-04-08 02:47:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:47:00.508139 | orchestrator | 2026-04-08 02:47:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:47:00.508189 | orchestrator | 2026-04-08 02:47:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:47:03.557358 | orchestrator | 2026-04-08 02:47:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:47:03.559104 | orchestrator | 2026-04-08 02:47:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:47:03.559136 | orchestrator | 2026-04-08 02:47:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:47:06.610145 | orchestrator | 2026-04-08 02:47:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:47:06.611153 | orchestrator | 2026-04-08 02:47:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:47:06.611317 | orchestrator | 2026-04-08 02:47:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:47:09.660430 | orchestrator | 2026-04-08 02:47:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:47:09.662717 | orchestrator | 2026-04-08 02:47:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:47:09.662780 | orchestrator | 2026-04-08 02:47:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:47:12.714670 | orchestrator | 2026-04-08 02:47:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:47:12.716516 | orchestrator | 2026-04-08 02:47:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:47:12.716592 | orchestrator | 2026-04-08 02:47:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:47:15.766183 | orchestrator | 2026-04-08 02:47:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:47:15.768063 | orchestrator | 2026-04-08 02:47:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:47:15.768148 | orchestrator | 2026-04-08 02:47:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:47:18.813307 | orchestrator | 2026-04-08 02:47:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:47:18.814818 | orchestrator | 2026-04-08 02:47:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:47:18.815025 | orchestrator | 2026-04-08 02:47:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:47:21.862480 | orchestrator | 2026-04-08 02:47:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:47:21.863691 | orchestrator | 2026-04-08 02:47:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:47:21.863719 | orchestrator | 2026-04-08 02:47:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:47:24.908254 | orchestrator | 2026-04-08 02:47:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:47:24.908605 | orchestrator | 2026-04-08 02:47:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:47:24.908635 | orchestrator | 2026-04-08 02:47:24 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 02:47:27.960761 | orchestrator | 2026-04-08 02:47:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:47:27.962713 | orchestrator | 2026-04-08 02:47:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:47:27.962754 | orchestrator | 2026-04-08 02:47:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:49:31.131237 | orchestrator | 2026-04-08 02:49:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:49:31.131331 | orchestrator | 2026-04-08 02:49:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:49:31.131341 | orchestrator | 2026-04-08 02:49:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:49:34.183180 | orchestrator | 2026-04-08 02:49:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:49:34.184781 | orchestrator | 2026-04-08 02:49:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:49:34.184838 | orchestrator | 2026-04-08 02:49:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:49:37.230703 | orchestrator | 2026-04-08 02:49:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:49:37.233207 | orchestrator | 2026-04-08 02:49:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:49:37.233333 | orchestrator | 2026-04-08 02:49:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:49:40.286234 | orchestrator | 2026-04-08 02:49:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:49:40.288415 | orchestrator | 2026-04-08 02:49:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:49:40.288471 | orchestrator | 2026-04-08 02:49:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:49:43.334321 | orchestrator | 2026-04-08 
02:49:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:49:43.336074 | orchestrator | 2026-04-08 02:49:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:49:43.336139 | orchestrator | 2026-04-08 02:49:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:49:46.379934 | orchestrator | 2026-04-08 02:49:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:49:46.381910 | orchestrator | 2026-04-08 02:49:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:49:46.381986 | orchestrator | 2026-04-08 02:49:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:49:49.431196 | orchestrator | 2026-04-08 02:49:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:49:49.433062 | orchestrator | 2026-04-08 02:49:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:49:49.433111 | orchestrator | 2026-04-08 02:49:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:49:52.472621 | orchestrator | 2026-04-08 02:49:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:49:52.475120 | orchestrator | 2026-04-08 02:49:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:49:52.475189 | orchestrator | 2026-04-08 02:49:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:49:55.526230 | orchestrator | 2026-04-08 02:49:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:49:55.528682 | orchestrator | 2026-04-08 02:49:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:49:55.528740 | orchestrator | 2026-04-08 02:49:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:49:58.576608 | orchestrator | 2026-04-08 02:49:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 02:49:58.577700 | orchestrator | 2026-04-08 02:49:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:49:58.577770 | orchestrator | 2026-04-08 02:49:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:50:01.622303 | orchestrator | 2026-04-08 02:50:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:50:01.624610 | orchestrator | 2026-04-08 02:50:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:50:01.624668 | orchestrator | 2026-04-08 02:50:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:50:04.673832 | orchestrator | 2026-04-08 02:50:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:50:04.676570 | orchestrator | 2026-04-08 02:50:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:50:04.676666 | orchestrator | 2026-04-08 02:50:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:50:07.718168 | orchestrator | 2026-04-08 02:50:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:50:07.719026 | orchestrator | 2026-04-08 02:50:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:50:07.719078 | orchestrator | 2026-04-08 02:50:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:50:10.764493 | orchestrator | 2026-04-08 02:50:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:50:10.765291 | orchestrator | 2026-04-08 02:50:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:50:10.765340 | orchestrator | 2026-04-08 02:50:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:50:13.810162 | orchestrator | 2026-04-08 02:50:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:50:13.813666 | orchestrator | 2026-04-08 02:50:13 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:50:13.813740 | orchestrator | 2026-04-08 02:50:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:50:16.858972 | orchestrator | 2026-04-08 02:50:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:50:16.860489 | orchestrator | 2026-04-08 02:50:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:50:16.860526 | orchestrator | 2026-04-08 02:50:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:50:19.907728 | orchestrator | 2026-04-08 02:50:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:50:19.909154 | orchestrator | 2026-04-08 02:50:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:50:19.909240 | orchestrator | 2026-04-08 02:50:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:50:22.952938 | orchestrator | 2026-04-08 02:50:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:50:22.955847 | orchestrator | 2026-04-08 02:50:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:50:22.955999 | orchestrator | 2026-04-08 02:50:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:50:26.003674 | orchestrator | 2026-04-08 02:50:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:50:26.004879 | orchestrator | 2026-04-08 02:50:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:50:26.005515 | orchestrator | 2026-04-08 02:50:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:50:29.049857 | orchestrator | 2026-04-08 02:50:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:50:29.051102 | orchestrator | 2026-04-08 02:50:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:50:29.051151 | orchestrator | 2026-04-08 02:50:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:50:32.089469 | orchestrator | 2026-04-08 02:50:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:50:32.091151 | orchestrator | 2026-04-08 02:50:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:50:32.091199 | orchestrator | 2026-04-08 02:50:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:50:35.143295 | orchestrator | 2026-04-08 02:50:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:50:35.144823 | orchestrator | 2026-04-08 02:50:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:50:35.144904 | orchestrator | 2026-04-08 02:50:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:50:38.196182 | orchestrator | 2026-04-08 02:50:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:50:38.199853 | orchestrator | 2026-04-08 02:50:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:50:38.199921 | orchestrator | 2026-04-08 02:50:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:50:41.242544 | orchestrator | 2026-04-08 02:50:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:50:41.247112 | orchestrator | 2026-04-08 02:50:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:50:41.247199 | orchestrator | 2026-04-08 02:50:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:50:44.291457 | orchestrator | 2026-04-08 02:50:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:50:44.292889 | orchestrator | 2026-04-08 02:50:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:50:44.292968 | orchestrator | 2026-04-08 02:50:44 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 02:50:47.344689 | orchestrator | 2026-04-08 02:50:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:50:47.345992 | orchestrator | 2026-04-08 02:50:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
[identical status checks repeated every ~3 seconds from 02:50:47 to 02:56:01; both tasks remained in state STARTED, each check followed by "Wait 1 second(s) until the next check"]
2026-04-08 02:56:01.338060 | orchestrator | 2026-04-08 02:56:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:56:01.340012 | orchestrator | 2026-04-08 02:56:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:01.340073 | orchestrator | 2026-04-08 02:56:01 | INFO  | Wait 1 second(s)
until the next check 2026-04-08 02:56:04.385273 | orchestrator | 2026-04-08 02:56:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:56:04.385536 | orchestrator | 2026-04-08 02:56:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:04.385560 | orchestrator | 2026-04-08 02:56:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:56:07.431916 | orchestrator | 2026-04-08 02:56:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:56:07.433053 | orchestrator | 2026-04-08 02:56:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:07.433145 | orchestrator | 2026-04-08 02:56:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:56:10.480305 | orchestrator | 2026-04-08 02:56:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:56:10.481817 | orchestrator | 2026-04-08 02:56:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:10.481873 | orchestrator | 2026-04-08 02:56:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:56:13.530929 | orchestrator | 2026-04-08 02:56:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:56:13.533710 | orchestrator | 2026-04-08 02:56:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:13.533738 | orchestrator | 2026-04-08 02:56:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:56:16.583393 | orchestrator | 2026-04-08 02:56:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:56:16.584784 | orchestrator | 2026-04-08 02:56:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:16.584815 | orchestrator | 2026-04-08 02:56:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:56:19.634805 | orchestrator | 2026-04-08 
02:56:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:56:19.637940 | orchestrator | 2026-04-08 02:56:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:19.638004 | orchestrator | 2026-04-08 02:56:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:56:22.690632 | orchestrator | 2026-04-08 02:56:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:56:22.693456 | orchestrator | 2026-04-08 02:56:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:22.693528 | orchestrator | 2026-04-08 02:56:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:56:25.746702 | orchestrator | 2026-04-08 02:56:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:56:25.750447 | orchestrator | 2026-04-08 02:56:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:25.750563 | orchestrator | 2026-04-08 02:56:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:56:28.796119 | orchestrator | 2026-04-08 02:56:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:56:28.798185 | orchestrator | 2026-04-08 02:56:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:28.798297 | orchestrator | 2026-04-08 02:56:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:56:31.845759 | orchestrator | 2026-04-08 02:56:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:56:31.847416 | orchestrator | 2026-04-08 02:56:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:31.847476 | orchestrator | 2026-04-08 02:56:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:56:34.897844 | orchestrator | 2026-04-08 02:56:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 02:56:34.902087 | orchestrator | 2026-04-08 02:56:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:34.902238 | orchestrator | 2026-04-08 02:56:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:56:37.946557 | orchestrator | 2026-04-08 02:56:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:56:37.947616 | orchestrator | 2026-04-08 02:56:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:37.947656 | orchestrator | 2026-04-08 02:56:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:56:40.996706 | orchestrator | 2026-04-08 02:56:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:56:40.998482 | orchestrator | 2026-04-08 02:56:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:40.998547 | orchestrator | 2026-04-08 02:56:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:56:44.054419 | orchestrator | 2026-04-08 02:56:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:56:44.055620 | orchestrator | 2026-04-08 02:56:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:44.055953 | orchestrator | 2026-04-08 02:56:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:56:47.106157 | orchestrator | 2026-04-08 02:56:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:56:47.107351 | orchestrator | 2026-04-08 02:56:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:47.107396 | orchestrator | 2026-04-08 02:56:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:56:50.158180 | orchestrator | 2026-04-08 02:56:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:56:50.160390 | orchestrator | 2026-04-08 02:56:50 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:50.160432 | orchestrator | 2026-04-08 02:56:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:56:53.206871 | orchestrator | 2026-04-08 02:56:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:56:53.207410 | orchestrator | 2026-04-08 02:56:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:53.207455 | orchestrator | 2026-04-08 02:56:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:56:56.250995 | orchestrator | 2026-04-08 02:56:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:56:56.252188 | orchestrator | 2026-04-08 02:56:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:56.252288 | orchestrator | 2026-04-08 02:56:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:56:59.312649 | orchestrator | 2026-04-08 02:56:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:56:59.316523 | orchestrator | 2026-04-08 02:56:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:56:59.316659 | orchestrator | 2026-04-08 02:56:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:57:02.369147 | orchestrator | 2026-04-08 02:57:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:57:02.370179 | orchestrator | 2026-04-08 02:57:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:57:02.370497 | orchestrator | 2026-04-08 02:57:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:57:05.424370 | orchestrator | 2026-04-08 02:57:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:57:05.426415 | orchestrator | 2026-04-08 02:57:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:57:05.426489 | orchestrator | 2026-04-08 02:57:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:57:08.473112 | orchestrator | 2026-04-08 02:57:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:57:08.475009 | orchestrator | 2026-04-08 02:57:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:57:08.475310 | orchestrator | 2026-04-08 02:57:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:57:11.523785 | orchestrator | 2026-04-08 02:57:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:57:11.525437 | orchestrator | 2026-04-08 02:57:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:57:11.525522 | orchestrator | 2026-04-08 02:57:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:57:14.578994 | orchestrator | 2026-04-08 02:57:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:57:14.580070 | orchestrator | 2026-04-08 02:57:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:57:14.580328 | orchestrator | 2026-04-08 02:57:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:57:17.635710 | orchestrator | 2026-04-08 02:57:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:57:17.638319 | orchestrator | 2026-04-08 02:57:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:57:17.638471 | orchestrator | 2026-04-08 02:57:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:57:20.688594 | orchestrator | 2026-04-08 02:57:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:57:20.690445 | orchestrator | 2026-04-08 02:57:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:57:20.690521 | orchestrator | 2026-04-08 02:57:20 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 02:57:23.737140 | orchestrator | 2026-04-08 02:57:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:57:23.740301 | orchestrator | 2026-04-08 02:57:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:57:23.740376 | orchestrator | 2026-04-08 02:57:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:57:26.783047 | orchestrator | 2026-04-08 02:57:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:57:26.784468 | orchestrator | 2026-04-08 02:57:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:57:26.784527 | orchestrator | 2026-04-08 02:57:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:57:29.827248 | orchestrator | 2026-04-08 02:57:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:57:29.828085 | orchestrator | 2026-04-08 02:57:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:57:29.828124 | orchestrator | 2026-04-08 02:57:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:57:32.877595 | orchestrator | 2026-04-08 02:57:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:57:32.879107 | orchestrator | 2026-04-08 02:57:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:57:32.879152 | orchestrator | 2026-04-08 02:57:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:57:35.928115 | orchestrator | 2026-04-08 02:57:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:57:35.930650 | orchestrator | 2026-04-08 02:57:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:57:35.930718 | orchestrator | 2026-04-08 02:57:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:57:38.978806 | orchestrator | 2026-04-08 
02:57:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:57:38.980749 | orchestrator | 2026-04-08 02:57:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:57:38.980839 | orchestrator | 2026-04-08 02:57:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:57:42.029569 | orchestrator | 2026-04-08 02:57:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:57:42.030940 | orchestrator | 2026-04-08 02:57:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:57:42.031008 | orchestrator | 2026-04-08 02:57:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:57:45.075954 | orchestrator | 2026-04-08 02:57:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:57:45.077980 | orchestrator | 2026-04-08 02:57:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:57:45.078071 | orchestrator | 2026-04-08 02:57:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:57:48.131964 | orchestrator | 2026-04-08 02:57:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:57:48.134619 | orchestrator | 2026-04-08 02:57:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:57:48.134683 | orchestrator | 2026-04-08 02:57:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:57:51.179225 | orchestrator | 2026-04-08 02:57:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:57:51.181403 | orchestrator | 2026-04-08 02:57:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:57:51.181488 | orchestrator | 2026-04-08 02:57:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:57:54.231603 | orchestrator | 2026-04-08 02:57:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 02:57:54.233272 | orchestrator | 2026-04-08 02:57:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:57:54.233339 | orchestrator | 2026-04-08 02:57:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:57:57.284252 | orchestrator | 2026-04-08 02:57:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:57:57.285917 | orchestrator | 2026-04-08 02:57:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:57:57.286089 | orchestrator | 2026-04-08 02:57:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:58:00.334426 | orchestrator | 2026-04-08 02:58:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:00.336013 | orchestrator | 2026-04-08 02:58:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:58:00.336312 | orchestrator | 2026-04-08 02:58:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:58:03.389016 | orchestrator | 2026-04-08 02:58:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:03.390458 | orchestrator | 2026-04-08 02:58:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:58:03.390522 | orchestrator | 2026-04-08 02:58:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:58:06.442524 | orchestrator | 2026-04-08 02:58:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:06.444650 | orchestrator | 2026-04-08 02:58:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:58:06.444696 | orchestrator | 2026-04-08 02:58:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:58:09.494801 | orchestrator | 2026-04-08 02:58:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:09.495993 | orchestrator | 2026-04-08 02:58:09 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:58:09.496144 | orchestrator | 2026-04-08 02:58:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:58:12.545273 | orchestrator | 2026-04-08 02:58:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:12.545803 | orchestrator | 2026-04-08 02:58:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:58:12.545936 | orchestrator | 2026-04-08 02:58:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:58:15.592021 | orchestrator | 2026-04-08 02:58:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:15.593013 | orchestrator | 2026-04-08 02:58:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:58:15.593041 | orchestrator | 2026-04-08 02:58:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:58:18.643531 | orchestrator | 2026-04-08 02:58:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:18.645644 | orchestrator | 2026-04-08 02:58:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:58:18.645759 | orchestrator | 2026-04-08 02:58:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:58:21.690947 | orchestrator | 2026-04-08 02:58:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:21.691589 | orchestrator | 2026-04-08 02:58:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:58:21.691686 | orchestrator | 2026-04-08 02:58:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:58:24.746130 | orchestrator | 2026-04-08 02:58:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:24.747674 | orchestrator | 2026-04-08 02:58:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:58:24.747842 | orchestrator | 2026-04-08 02:58:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:58:27.796767 | orchestrator | 2026-04-08 02:58:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:27.799772 | orchestrator | 2026-04-08 02:58:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:58:27.799869 | orchestrator | 2026-04-08 02:58:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:58:30.857998 | orchestrator | 2026-04-08 02:58:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:30.859054 | orchestrator | 2026-04-08 02:58:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:58:30.859109 | orchestrator | 2026-04-08 02:58:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:58:33.903496 | orchestrator | 2026-04-08 02:58:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:33.905727 | orchestrator | 2026-04-08 02:58:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:58:33.905781 | orchestrator | 2026-04-08 02:58:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:58:36.956505 | orchestrator | 2026-04-08 02:58:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:36.956600 | orchestrator | 2026-04-08 02:58:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:58:36.956616 | orchestrator | 2026-04-08 02:58:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:58:40.012360 | orchestrator | 2026-04-08 02:58:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:40.017714 | orchestrator | 2026-04-08 02:58:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:58:40.017801 | orchestrator | 2026-04-08 02:58:40 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 02:58:43.064576 | orchestrator | 2026-04-08 02:58:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:43.066168 | orchestrator | 2026-04-08 02:58:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:58:43.066236 | orchestrator | 2026-04-08 02:58:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:58:46.115359 | orchestrator | 2026-04-08 02:58:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:46.116921 | orchestrator | 2026-04-08 02:58:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:58:46.116981 | orchestrator | 2026-04-08 02:58:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:58:49.163573 | orchestrator | 2026-04-08 02:58:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:49.166742 | orchestrator | 2026-04-08 02:58:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:58:49.166809 | orchestrator | 2026-04-08 02:58:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:58:52.212475 | orchestrator | 2026-04-08 02:58:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:52.214105 | orchestrator | 2026-04-08 02:58:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:58:52.214177 | orchestrator | 2026-04-08 02:58:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:58:55.263706 | orchestrator | 2026-04-08 02:58:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:55.265884 | orchestrator | 2026-04-08 02:58:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:58:55.265945 | orchestrator | 2026-04-08 02:58:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:58:58.309849 | orchestrator | 2026-04-08 
02:58:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:58:58.311079 | orchestrator | 2026-04-08 02:58:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:58:58.311203 | orchestrator | 2026-04-08 02:58:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:59:01.367712 | orchestrator | 2026-04-08 02:59:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:59:01.369368 | orchestrator | 2026-04-08 02:59:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:59:01.369427 | orchestrator | 2026-04-08 02:59:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:59:04.413726 | orchestrator | 2026-04-08 02:59:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:59:04.414965 | orchestrator | 2026-04-08 02:59:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:59:04.414998 | orchestrator | 2026-04-08 02:59:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:59:07.466957 | orchestrator | 2026-04-08 02:59:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:59:07.468673 | orchestrator | 2026-04-08 02:59:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:59:07.468717 | orchestrator | 2026-04-08 02:59:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:59:10.517762 | orchestrator | 2026-04-08 02:59:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:59:10.519137 | orchestrator | 2026-04-08 02:59:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:59:10.519189 | orchestrator | 2026-04-08 02:59:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:59:13.563999 | orchestrator | 2026-04-08 02:59:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 02:59:13.566548 | orchestrator | 2026-04-08 02:59:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:59:13.566612 | orchestrator | 2026-04-08 02:59:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:59:16.603201 | orchestrator | 2026-04-08 02:59:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:59:16.605534 | orchestrator | 2026-04-08 02:59:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:59:16.605611 | orchestrator | 2026-04-08 02:59:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:59:19.649357 | orchestrator | 2026-04-08 02:59:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:59:19.650259 | orchestrator | 2026-04-08 02:59:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:59:19.650295 | orchestrator | 2026-04-08 02:59:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:59:22.699737 | orchestrator | 2026-04-08 02:59:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:59:22.701810 | orchestrator | 2026-04-08 02:59:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:59:22.701956 | orchestrator | 2026-04-08 02:59:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:59:25.748108 | orchestrator | 2026-04-08 02:59:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:59:25.749167 | orchestrator | 2026-04-08 02:59:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:59:25.749232 | orchestrator | 2026-04-08 02:59:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:59:28.796335 | orchestrator | 2026-04-08 02:59:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:59:28.798334 | orchestrator | 2026-04-08 02:59:28 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:59:28.798384 | orchestrator | 2026-04-08 02:59:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:59:31.847503 | orchestrator | 2026-04-08 02:59:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:59:31.849758 | orchestrator | 2026-04-08 02:59:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:59:31.849830 | orchestrator | 2026-04-08 02:59:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:59:34.900149 | orchestrator | 2026-04-08 02:59:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:59:34.901150 | orchestrator | 2026-04-08 02:59:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:59:34.901276 | orchestrator | 2026-04-08 02:59:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:59:37.954159 | orchestrator | 2026-04-08 02:59:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:59:37.955790 | orchestrator | 2026-04-08 02:59:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:59:37.956541 | orchestrator | 2026-04-08 02:59:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:59:41.012744 | orchestrator | 2026-04-08 02:59:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:59:41.014135 | orchestrator | 2026-04-08 02:59:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 02:59:41.014250 | orchestrator | 2026-04-08 02:59:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 02:59:44.055878 | orchestrator | 2026-04-08 02:59:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 02:59:44.057592 | orchestrator | 2026-04-08 02:59:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
02:59:44.057715 | orchestrator | 2026-04-08 02:59:44 | INFO  | Wait 1 second(s) until the next check
2026-04-08 02:59:47.105196 | orchestrator | 2026-04-08 02:59:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 02:59:47.106299 | orchestrator | 2026-04-08 02:59:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 02:59:47.106344 | orchestrator | 2026-04-08 02:59:47 | INFO  | Wait 1 second(s) until the next check
[... identical poll cycle repeated every ~3 seconds from 02:59:50 through 03:05:13; both tasks remain in state STARTED throughout ...]
2026-04-08 03:05:16.589925 | orchestrator | 2026-04-08 03:05:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 03:05:16.592112 | orchestrator | 2026-04-08 03:05:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 03:05:16.592227 | orchestrator | 2026-04-08 03:05:16 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 03:05:19.639485 | orchestrator | 2026-04-08 03:05:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:05:19.641811 | orchestrator | 2026-04-08 03:05:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:05:19.641927 | orchestrator | 2026-04-08 03:05:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:05:22.685471 | orchestrator | 2026-04-08 03:05:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:05:22.687391 | orchestrator | 2026-04-08 03:05:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:05:22.687424 | orchestrator | 2026-04-08 03:05:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:05:25.739627 | orchestrator | 2026-04-08 03:05:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:05:25.741267 | orchestrator | 2026-04-08 03:05:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:05:25.741306 | orchestrator | 2026-04-08 03:05:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:05:28.798268 | orchestrator | 2026-04-08 03:05:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:05:28.800426 | orchestrator | 2026-04-08 03:05:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:05:28.800554 | orchestrator | 2026-04-08 03:05:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:05:31.855159 | orchestrator | 2026-04-08 03:05:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:05:31.858537 | orchestrator | 2026-04-08 03:05:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:05:31.858599 | orchestrator | 2026-04-08 03:05:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:05:34.910132 | orchestrator | 2026-04-08 
03:05:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:05:34.912726 | orchestrator | 2026-04-08 03:05:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:05:34.912778 | orchestrator | 2026-04-08 03:05:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:05:37.956684 | orchestrator | 2026-04-08 03:05:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:05:37.959074 | orchestrator | 2026-04-08 03:05:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:05:37.959164 | orchestrator | 2026-04-08 03:05:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:05:41.003294 | orchestrator | 2026-04-08 03:05:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:05:41.004285 | orchestrator | 2026-04-08 03:05:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:05:41.004365 | orchestrator | 2026-04-08 03:05:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:05:44.051438 | orchestrator | 2026-04-08 03:05:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:05:44.055211 | orchestrator | 2026-04-08 03:05:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:05:44.055428 | orchestrator | 2026-04-08 03:05:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:05:47.111103 | orchestrator | 2026-04-08 03:05:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:05:47.113447 | orchestrator | 2026-04-08 03:05:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:05:47.113554 | orchestrator | 2026-04-08 03:05:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:05:50.166064 | orchestrator | 2026-04-08 03:05:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 03:05:50.167355 | orchestrator | 2026-04-08 03:05:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:05:50.167398 | orchestrator | 2026-04-08 03:05:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:05:53.220367 | orchestrator | 2026-04-08 03:05:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:05:53.222514 | orchestrator | 2026-04-08 03:05:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:05:53.222658 | orchestrator | 2026-04-08 03:05:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:05:56.272176 | orchestrator | 2026-04-08 03:05:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:05:56.273706 | orchestrator | 2026-04-08 03:05:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:05:56.273813 | orchestrator | 2026-04-08 03:05:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:05:59.325734 | orchestrator | 2026-04-08 03:05:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:05:59.329894 | orchestrator | 2026-04-08 03:05:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:05:59.330137 | orchestrator | 2026-04-08 03:05:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:06:02.378100 | orchestrator | 2026-04-08 03:06:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:06:02.380420 | orchestrator | 2026-04-08 03:06:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:06:02.380463 | orchestrator | 2026-04-08 03:06:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:06:05.433437 | orchestrator | 2026-04-08 03:06:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:06:05.434250 | orchestrator | 2026-04-08 03:06:05 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:06:05.434394 | orchestrator | 2026-04-08 03:06:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:06:08.479936 | orchestrator | 2026-04-08 03:06:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:06:08.480057 | orchestrator | 2026-04-08 03:06:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:06:08.480068 | orchestrator | 2026-04-08 03:06:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:06:11.520397 | orchestrator | 2026-04-08 03:06:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:06:11.521534 | orchestrator | 2026-04-08 03:06:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:06:11.521649 | orchestrator | 2026-04-08 03:06:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:06:14.568502 | orchestrator | 2026-04-08 03:06:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:06:14.568585 | orchestrator | 2026-04-08 03:06:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:06:14.568593 | orchestrator | 2026-04-08 03:06:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:06:17.620288 | orchestrator | 2026-04-08 03:06:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:06:17.621393 | orchestrator | 2026-04-08 03:06:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:06:17.621445 | orchestrator | 2026-04-08 03:06:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:06:20.669257 | orchestrator | 2026-04-08 03:06:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:06:20.669691 | orchestrator | 2026-04-08 03:06:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
03:06:20.669735 | orchestrator | 2026-04-08 03:06:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:06:23.721089 | orchestrator | 2026-04-08 03:06:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:06:23.722769 | orchestrator | 2026-04-08 03:06:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:06:23.722812 | orchestrator | 2026-04-08 03:06:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:06:26.771815 | orchestrator | 2026-04-08 03:06:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:06:26.774143 | orchestrator | 2026-04-08 03:06:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:06:26.774302 | orchestrator | 2026-04-08 03:06:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:06:29.820539 | orchestrator | 2026-04-08 03:06:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:06:29.822073 | orchestrator | 2026-04-08 03:06:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:06:29.822182 | orchestrator | 2026-04-08 03:06:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:06:32.868234 | orchestrator | 2026-04-08 03:06:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:06:32.870577 | orchestrator | 2026-04-08 03:06:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:06:32.870766 | orchestrator | 2026-04-08 03:06:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:06:35.920048 | orchestrator | 2026-04-08 03:06:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:06:35.923251 | orchestrator | 2026-04-08 03:06:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:06:35.923362 | orchestrator | 2026-04-08 03:06:35 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 03:06:38.971718 | orchestrator | 2026-04-08 03:06:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:06:38.972851 | orchestrator | 2026-04-08 03:06:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:06:38.972892 | orchestrator | 2026-04-08 03:06:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:06:42.031565 | orchestrator | 2026-04-08 03:06:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:06:42.032155 | orchestrator | 2026-04-08 03:06:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:06:42.032222 | orchestrator | 2026-04-08 03:06:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:06:45.083246 | orchestrator | 2026-04-08 03:06:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:06:45.085143 | orchestrator | 2026-04-08 03:06:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:06:45.085243 | orchestrator | 2026-04-08 03:06:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:06:48.142406 | orchestrator | 2026-04-08 03:06:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:06:48.144517 | orchestrator | 2026-04-08 03:06:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:06:48.144594 | orchestrator | 2026-04-08 03:06:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:06:51.193806 | orchestrator | 2026-04-08 03:06:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:06:51.195831 | orchestrator | 2026-04-08 03:06:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:06:51.195869 | orchestrator | 2026-04-08 03:06:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:06:54.244831 | orchestrator | 2026-04-08 
03:06:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:06:54.246395 | orchestrator | 2026-04-08 03:06:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:06:54.246584 | orchestrator | 2026-04-08 03:06:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:06:57.302668 | orchestrator | 2026-04-08 03:06:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:06:57.303910 | orchestrator | 2026-04-08 03:06:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:06:57.303947 | orchestrator | 2026-04-08 03:06:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:07:00.353739 | orchestrator | 2026-04-08 03:07:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:07:00.358070 | orchestrator | 2026-04-08 03:07:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:07:00.358224 | orchestrator | 2026-04-08 03:07:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:07:03.413898 | orchestrator | 2026-04-08 03:07:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:07:03.415414 | orchestrator | 2026-04-08 03:07:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:07:03.415503 | orchestrator | 2026-04-08 03:07:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:07:06.464375 | orchestrator | 2026-04-08 03:07:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:07:06.465749 | orchestrator | 2026-04-08 03:07:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:07:06.465789 | orchestrator | 2026-04-08 03:07:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:07:09.517015 | orchestrator | 2026-04-08 03:07:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 03:07:09.518239 | orchestrator | 2026-04-08 03:07:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:07:09.518292 | orchestrator | 2026-04-08 03:07:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:07:12.564327 | orchestrator | 2026-04-08 03:07:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:07:12.565824 | orchestrator | 2026-04-08 03:07:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:07:12.566083 | orchestrator | 2026-04-08 03:07:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:07:15.618472 | orchestrator | 2026-04-08 03:07:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:07:15.620164 | orchestrator | 2026-04-08 03:07:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:07:15.620199 | orchestrator | 2026-04-08 03:07:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:07:18.670358 | orchestrator | 2026-04-08 03:07:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:07:18.672388 | orchestrator | 2026-04-08 03:07:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:07:18.672437 | orchestrator | 2026-04-08 03:07:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:07:21.721466 | orchestrator | 2026-04-08 03:07:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:07:21.723291 | orchestrator | 2026-04-08 03:07:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:07:21.723351 | orchestrator | 2026-04-08 03:07:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:07:24.770425 | orchestrator | 2026-04-08 03:07:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:07:24.772247 | orchestrator | 2026-04-08 03:07:24 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:07:24.772299 | orchestrator | 2026-04-08 03:07:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:07:27.822096 | orchestrator | 2026-04-08 03:07:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:07:27.823961 | orchestrator | 2026-04-08 03:07:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:07:27.824042 | orchestrator | 2026-04-08 03:07:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:07:30.874105 | orchestrator | 2026-04-08 03:07:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:07:30.876525 | orchestrator | 2026-04-08 03:07:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:07:30.876667 | orchestrator | 2026-04-08 03:07:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:07:33.923758 | orchestrator | 2026-04-08 03:07:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:07:33.924784 | orchestrator | 2026-04-08 03:07:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:07:33.924886 | orchestrator | 2026-04-08 03:07:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:07:36.978087 | orchestrator | 2026-04-08 03:07:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:07:36.980653 | orchestrator | 2026-04-08 03:07:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:07:36.980701 | orchestrator | 2026-04-08 03:07:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:07:40.021242 | orchestrator | 2026-04-08 03:07:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:07:40.021443 | orchestrator | 2026-04-08 03:07:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
03:07:40.021464 | orchestrator | 2026-04-08 03:07:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:07:43.076976 | orchestrator | 2026-04-08 03:07:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:07:43.080346 | orchestrator | 2026-04-08 03:07:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:07:43.080924 | orchestrator | 2026-04-08 03:07:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:07:46.132395 | orchestrator | 2026-04-08 03:07:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:07:46.135006 | orchestrator | 2026-04-08 03:07:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:07:46.135107 | orchestrator | 2026-04-08 03:07:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:07:49.186393 | orchestrator | 2026-04-08 03:07:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:07:49.189080 | orchestrator | 2026-04-08 03:07:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:07:49.189148 | orchestrator | 2026-04-08 03:07:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:07:52.250252 | orchestrator | 2026-04-08 03:07:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:07:52.250584 | orchestrator | 2026-04-08 03:07:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:07:52.250612 | orchestrator | 2026-04-08 03:07:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:07:55.303301 | orchestrator | 2026-04-08 03:07:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:07:55.305215 | orchestrator | 2026-04-08 03:07:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:07:55.305278 | orchestrator | 2026-04-08 03:07:55 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 03:07:58.355865 | orchestrator | 2026-04-08 03:07:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:07:58.357509 | orchestrator | 2026-04-08 03:07:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:07:58.357577 | orchestrator | 2026-04-08 03:07:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:01.411942 | orchestrator | 2026-04-08 03:08:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:08:01.414094 | orchestrator | 2026-04-08 03:08:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:08:01.414162 | orchestrator | 2026-04-08 03:08:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:04.460638 | orchestrator | 2026-04-08 03:08:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:08:04.464524 | orchestrator | 2026-04-08 03:08:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:08:04.464623 | orchestrator | 2026-04-08 03:08:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:07.513344 | orchestrator | 2026-04-08 03:08:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:08:07.514864 | orchestrator | 2026-04-08 03:08:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:08:07.514906 | orchestrator | 2026-04-08 03:08:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:10.559988 | orchestrator | 2026-04-08 03:08:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:08:10.561558 | orchestrator | 2026-04-08 03:08:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:08:10.561752 | orchestrator | 2026-04-08 03:08:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:13.614260 | orchestrator | 2026-04-08 
03:08:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:08:13.615379 | orchestrator | 2026-04-08 03:08:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:08:13.615552 | orchestrator | 2026-04-08 03:08:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:16.664270 | orchestrator | 2026-04-08 03:08:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:08:16.665603 | orchestrator | 2026-04-08 03:08:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:08:16.665675 | orchestrator | 2026-04-08 03:08:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:19.710408 | orchestrator | 2026-04-08 03:08:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:08:19.711519 | orchestrator | 2026-04-08 03:08:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:08:19.711558 | orchestrator | 2026-04-08 03:08:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:22.766901 | orchestrator | 2026-04-08 03:08:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:08:22.767888 | orchestrator | 2026-04-08 03:08:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:08:22.767927 | orchestrator | 2026-04-08 03:08:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:25.819585 | orchestrator | 2026-04-08 03:08:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:08:25.822477 | orchestrator | 2026-04-08 03:08:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:08:25.822531 | orchestrator | 2026-04-08 03:08:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:28.870278 | orchestrator | 2026-04-08 03:08:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 03:08:28.870904 | orchestrator | 2026-04-08 03:08:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:08:28.870929 | orchestrator | 2026-04-08 03:08:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:31.917227 | orchestrator | 2026-04-08 03:08:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:08:31.917316 | orchestrator | 2026-04-08 03:08:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:08:31.917364 | orchestrator | 2026-04-08 03:08:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:34.965411 | orchestrator | 2026-04-08 03:08:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:08:34.966671 | orchestrator | 2026-04-08 03:08:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:08:34.966698 | orchestrator | 2026-04-08 03:08:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:38.007141 | orchestrator | 2026-04-08 03:08:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:08:38.009379 | orchestrator | 2026-04-08 03:08:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:08:38.009414 | orchestrator | 2026-04-08 03:08:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:41.058505 | orchestrator | 2026-04-08 03:08:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:08:41.058709 | orchestrator | 2026-04-08 03:08:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:08:41.058732 | orchestrator | 2026-04-08 03:08:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:44.108888 | orchestrator | 2026-04-08 03:08:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:08:44.110394 | orchestrator | 2026-04-08 03:08:44 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:08:44.110421 | orchestrator | 2026-04-08 03:08:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:47.151435 | orchestrator | 2026-04-08 03:08:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:08:47.152225 | orchestrator | 2026-04-08 03:08:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:08:47.152273 | orchestrator | 2026-04-08 03:08:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:50.201408 | orchestrator | 2026-04-08 03:08:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:08:50.204938 | orchestrator | 2026-04-08 03:08:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:08:50.205016 | orchestrator | 2026-04-08 03:08:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:53.256231 | orchestrator | 2026-04-08 03:08:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:08:53.257294 | orchestrator | 2026-04-08 03:08:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:08:53.257327 | orchestrator | 2026-04-08 03:08:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:56.308758 | orchestrator | 2026-04-08 03:08:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:08:56.310761 | orchestrator | 2026-04-08 03:08:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:08:56.310805 | orchestrator | 2026-04-08 03:08:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:08:59.359711 | orchestrator | 2026-04-08 03:08:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:08:59.361204 | orchestrator | 2026-04-08 03:08:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
03:08:59.361258 | orchestrator | 2026-04-08 03:08:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:09:02.419313 | orchestrator | 2026-04-08 03:09:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:09:02.421609 | orchestrator | 2026-04-08 03:09:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:09:02.421892 | orchestrator | 2026-04-08 03:09:02 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds (both tasks remain in state STARTED) from 03:09:05 until 03:14:01 ...]
2026-04-08 03:14:01.331696 | orchestrator | 2026-04-08 03:14:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:01.333657 | orchestrator | 2026-04-08 03:14:01 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:14:01.333711 | orchestrator | 2026-04-08 03:14:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:14:04.380215 | orchestrator | 2026-04-08 03:14:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:04.380805 | orchestrator | 2026-04-08 03:14:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:14:04.380850 | orchestrator | 2026-04-08 03:14:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:14:07.427120 | orchestrator | 2026-04-08 03:14:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:07.428858 | orchestrator | 2026-04-08 03:14:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:14:07.428933 | orchestrator | 2026-04-08 03:14:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:14:10.479112 | orchestrator | 2026-04-08 03:14:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:10.479406 | orchestrator | 2026-04-08 03:14:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:14:10.479456 | orchestrator | 2026-04-08 03:14:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:14:13.528700 | orchestrator | 2026-04-08 03:14:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:13.531057 | orchestrator | 2026-04-08 03:14:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:14:13.531649 | orchestrator | 2026-04-08 03:14:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:14:16.583988 | orchestrator | 2026-04-08 03:14:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:16.585721 | orchestrator | 2026-04-08 03:14:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
03:14:16.585773 | orchestrator | 2026-04-08 03:14:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:14:19.630533 | orchestrator | 2026-04-08 03:14:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:19.632023 | orchestrator | 2026-04-08 03:14:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:14:19.632072 | orchestrator | 2026-04-08 03:14:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:14:22.680833 | orchestrator | 2026-04-08 03:14:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:22.681583 | orchestrator | 2026-04-08 03:14:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:14:22.681712 | orchestrator | 2026-04-08 03:14:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:14:25.732870 | orchestrator | 2026-04-08 03:14:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:25.735155 | orchestrator | 2026-04-08 03:14:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:14:25.735192 | orchestrator | 2026-04-08 03:14:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:14:28.786292 | orchestrator | 2026-04-08 03:14:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:28.788581 | orchestrator | 2026-04-08 03:14:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:14:28.788626 | orchestrator | 2026-04-08 03:14:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:14:31.832431 | orchestrator | 2026-04-08 03:14:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:31.835196 | orchestrator | 2026-04-08 03:14:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:14:31.835292 | orchestrator | 2026-04-08 03:14:31 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 03:14:34.891032 | orchestrator | 2026-04-08 03:14:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:34.892804 | orchestrator | 2026-04-08 03:14:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:14:34.892867 | orchestrator | 2026-04-08 03:14:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:14:37.939978 | orchestrator | 2026-04-08 03:14:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:37.941518 | orchestrator | 2026-04-08 03:14:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:14:37.941581 | orchestrator | 2026-04-08 03:14:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:14:40.990249 | orchestrator | 2026-04-08 03:14:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:40.991704 | orchestrator | 2026-04-08 03:14:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:14:40.991753 | orchestrator | 2026-04-08 03:14:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:14:44.032284 | orchestrator | 2026-04-08 03:14:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:44.033384 | orchestrator | 2026-04-08 03:14:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:14:44.033470 | orchestrator | 2026-04-08 03:14:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:14:47.088297 | orchestrator | 2026-04-08 03:14:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:47.092856 | orchestrator | 2026-04-08 03:14:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:14:47.092940 | orchestrator | 2026-04-08 03:14:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:14:50.136941 | orchestrator | 2026-04-08 
03:14:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:50.137046 | orchestrator | 2026-04-08 03:14:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:14:50.137054 | orchestrator | 2026-04-08 03:14:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:14:53.178188 | orchestrator | 2026-04-08 03:14:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:53.180350 | orchestrator | 2026-04-08 03:14:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:14:53.180391 | orchestrator | 2026-04-08 03:14:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:14:56.231823 | orchestrator | 2026-04-08 03:14:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:56.234299 | orchestrator | 2026-04-08 03:14:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:14:56.234376 | orchestrator | 2026-04-08 03:14:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:14:59.290686 | orchestrator | 2026-04-08 03:14:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:14:59.291924 | orchestrator | 2026-04-08 03:14:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:14:59.291966 | orchestrator | 2026-04-08 03:14:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:15:02.346766 | orchestrator | 2026-04-08 03:15:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:15:02.346868 | orchestrator | 2026-04-08 03:15:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:15:02.346879 | orchestrator | 2026-04-08 03:15:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:15:05.396054 | orchestrator | 2026-04-08 03:15:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 03:15:05.398372 | orchestrator | 2026-04-08 03:15:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:15:05.398456 | orchestrator | 2026-04-08 03:15:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:15:08.446525 | orchestrator | 2026-04-08 03:15:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:15:08.447576 | orchestrator | 2026-04-08 03:15:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:15:08.447622 | orchestrator | 2026-04-08 03:15:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:15:11.501353 | orchestrator | 2026-04-08 03:15:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:15:11.502540 | orchestrator | 2026-04-08 03:15:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:15:11.502767 | orchestrator | 2026-04-08 03:15:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:15:14.553681 | orchestrator | 2026-04-08 03:15:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:15:14.555654 | orchestrator | 2026-04-08 03:15:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:15:14.555701 | orchestrator | 2026-04-08 03:15:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:15:17.598963 | orchestrator | 2026-04-08 03:15:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:15:17.600890 | orchestrator | 2026-04-08 03:15:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:15:17.600944 | orchestrator | 2026-04-08 03:15:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:15:20.649786 | orchestrator | 2026-04-08 03:15:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:15:20.651511 | orchestrator | 2026-04-08 03:15:20 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:15:20.651572 | orchestrator | 2026-04-08 03:15:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:15:23.702273 | orchestrator | 2026-04-08 03:15:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:15:23.703579 | orchestrator | 2026-04-08 03:15:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:15:23.703624 | orchestrator | 2026-04-08 03:15:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:15:26.750602 | orchestrator | 2026-04-08 03:15:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:15:26.751002 | orchestrator | 2026-04-08 03:15:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:15:26.751115 | orchestrator | 2026-04-08 03:15:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:15:29.798062 | orchestrator | 2026-04-08 03:15:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:15:29.799774 | orchestrator | 2026-04-08 03:15:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:15:29.799813 | orchestrator | 2026-04-08 03:15:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:15:32.846707 | orchestrator | 2026-04-08 03:15:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:15:32.846945 | orchestrator | 2026-04-08 03:15:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:15:32.846980 | orchestrator | 2026-04-08 03:15:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:15:35.892278 | orchestrator | 2026-04-08 03:15:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:15:35.893572 | orchestrator | 2026-04-08 03:15:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
03:15:35.893643 | orchestrator | 2026-04-08 03:15:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:15:38.941254 | orchestrator | 2026-04-08 03:15:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:15:38.942754 | orchestrator | 2026-04-08 03:15:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:15:38.942816 | orchestrator | 2026-04-08 03:15:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:15:41.995821 | orchestrator | 2026-04-08 03:15:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:15:41.995926 | orchestrator | 2026-04-08 03:15:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:15:41.995942 | orchestrator | 2026-04-08 03:15:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:15:45.043454 | orchestrator | 2026-04-08 03:15:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:15:45.043598 | orchestrator | 2026-04-08 03:15:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:15:45.043610 | orchestrator | 2026-04-08 03:15:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:15:48.092397 | orchestrator | 2026-04-08 03:15:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:15:48.094251 | orchestrator | 2026-04-08 03:15:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:15:48.094332 | orchestrator | 2026-04-08 03:15:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:15:51.142953 | orchestrator | 2026-04-08 03:15:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:15:51.146791 | orchestrator | 2026-04-08 03:15:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:15:51.146975 | orchestrator | 2026-04-08 03:15:51 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 03:15:54.201400 | orchestrator | 2026-04-08 03:15:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:15:54.204337 | orchestrator | 2026-04-08 03:15:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:15:54.204438 | orchestrator | 2026-04-08 03:15:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:15:57.258723 | orchestrator | 2026-04-08 03:15:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:15:57.259119 | orchestrator | 2026-04-08 03:15:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:15:57.259142 | orchestrator | 2026-04-08 03:15:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:00.316079 | orchestrator | 2026-04-08 03:16:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:16:00.316199 | orchestrator | 2026-04-08 03:16:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:16:00.316360 | orchestrator | 2026-04-08 03:16:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:03.359956 | orchestrator | 2026-04-08 03:16:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:16:03.362160 | orchestrator | 2026-04-08 03:16:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:16:03.362308 | orchestrator | 2026-04-08 03:16:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:06.418842 | orchestrator | 2026-04-08 03:16:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:16:06.421874 | orchestrator | 2026-04-08 03:16:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:16:06.421924 | orchestrator | 2026-04-08 03:16:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:09.474683 | orchestrator | 2026-04-08 
03:16:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:16:09.475417 | orchestrator | 2026-04-08 03:16:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:16:09.475595 | orchestrator | 2026-04-08 03:16:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:12.525356 | orchestrator | 2026-04-08 03:16:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:16:12.525444 | orchestrator | 2026-04-08 03:16:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:16:12.525454 | orchestrator | 2026-04-08 03:16:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:15.576114 | orchestrator | 2026-04-08 03:16:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:16:15.576972 | orchestrator | 2026-04-08 03:16:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:16:15.577007 | orchestrator | 2026-04-08 03:16:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:18.622792 | orchestrator | 2026-04-08 03:16:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:16:18.625998 | orchestrator | 2026-04-08 03:16:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:16:18.626143 | orchestrator | 2026-04-08 03:16:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:21.679644 | orchestrator | 2026-04-08 03:16:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:16:21.681213 | orchestrator | 2026-04-08 03:16:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:16:21.681278 | orchestrator | 2026-04-08 03:16:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:24.733245 | orchestrator | 2026-04-08 03:16:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 03:16:24.736416 | orchestrator | 2026-04-08 03:16:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:16:24.736499 | orchestrator | 2026-04-08 03:16:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:27.778814 | orchestrator | 2026-04-08 03:16:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:16:27.778945 | orchestrator | 2026-04-08 03:16:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:16:27.779071 | orchestrator | 2026-04-08 03:16:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:30.828374 | orchestrator | 2026-04-08 03:16:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:16:30.830340 | orchestrator | 2026-04-08 03:16:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:16:30.830402 | orchestrator | 2026-04-08 03:16:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:33.882100 | orchestrator | 2026-04-08 03:16:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:16:33.884258 | orchestrator | 2026-04-08 03:16:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:16:33.884339 | orchestrator | 2026-04-08 03:16:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:36.941807 | orchestrator | 2026-04-08 03:16:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:16:36.941933 | orchestrator | 2026-04-08 03:16:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:16:36.941949 | orchestrator | 2026-04-08 03:16:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:39.991198 | orchestrator | 2026-04-08 03:16:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:16:39.992815 | orchestrator | 2026-04-08 03:16:39 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:16:39.992881 | orchestrator | 2026-04-08 03:16:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:43.042917 | orchestrator | 2026-04-08 03:16:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:16:43.044208 | orchestrator | 2026-04-08 03:16:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:16:43.044284 | orchestrator | 2026-04-08 03:16:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:46.102220 | orchestrator | 2026-04-08 03:16:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:16:46.103729 | orchestrator | 2026-04-08 03:16:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:16:46.103841 | orchestrator | 2026-04-08 03:16:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:49.152465 | orchestrator | 2026-04-08 03:16:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:16:49.152762 | orchestrator | 2026-04-08 03:16:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:16:49.152794 | orchestrator | 2026-04-08 03:16:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:52.199511 | orchestrator | 2026-04-08 03:16:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:16:52.200277 | orchestrator | 2026-04-08 03:16:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:16:52.200299 | orchestrator | 2026-04-08 03:16:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:55.248007 | orchestrator | 2026-04-08 03:16:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:16:55.249494 | orchestrator | 2026-04-08 03:16:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
03:16:55.249564 | orchestrator | 2026-04-08 03:16:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:16:58.289694 | orchestrator | 2026-04-08 03:16:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:16:58.290718 | orchestrator | 2026-04-08 03:16:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:16:58.290761 | orchestrator | 2026-04-08 03:16:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:17:01.340533 | orchestrator | 2026-04-08 03:17:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:17:01.342437 | orchestrator | 2026-04-08 03:17:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:17:01.342484 | orchestrator | 2026-04-08 03:17:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:17:04.384879 | orchestrator | 2026-04-08 03:17:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:17:04.387784 | orchestrator | 2026-04-08 03:17:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:17:04.387858 | orchestrator | 2026-04-08 03:17:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:17:07.431556 | orchestrator | 2026-04-08 03:17:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:17:07.433675 | orchestrator | 2026-04-08 03:17:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:17:07.433803 | orchestrator | 2026-04-08 03:17:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:17:10.487205 | orchestrator | 2026-04-08 03:17:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:17:10.487531 | orchestrator | 2026-04-08 03:17:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:17:10.487552 | orchestrator | 2026-04-08 03:17:10 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 03:17:13.534982 | orchestrator | 2026-04-08 03:17:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:17:13.536162 | orchestrator | 2026-04-08 03:17:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:17:13.536225 | orchestrator | 2026-04-08 03:17:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:17:16.587697 | orchestrator | 2026-04-08 03:17:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:17:16.589161 | orchestrator | 2026-04-08 03:17:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:17:16.589317 | orchestrator | 2026-04-08 03:17:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:17:19.637229 | orchestrator | 2026-04-08 03:17:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:17:19.638544 | orchestrator | 2026-04-08 03:17:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:17:19.638758 | orchestrator | 2026-04-08 03:17:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:17:22.686885 | orchestrator | 2026-04-08 03:17:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:17:22.687256 | orchestrator | 2026-04-08 03:17:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:17:22.687277 | orchestrator | 2026-04-08 03:17:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:17:25.735232 | orchestrator | 2026-04-08 03:17:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:17:25.737578 | orchestrator | 2026-04-08 03:17:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:17:25.737720 | orchestrator | 2026-04-08 03:17:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:17:28.785134 | orchestrator | 2026-04-08 
03:17:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:17:28.785918 | orchestrator | 2026-04-08 03:17:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:17:28.785948 | orchestrator | 2026-04-08 03:17:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:17:31.835706 | orchestrator | 2026-04-08 03:17:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:17:31.836121 | orchestrator | 2026-04-08 03:17:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:17:31.836164 | orchestrator | 2026-04-08 03:17:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:17:34.877290 | orchestrator | 2026-04-08 03:17:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:17:34.878866 | orchestrator | 2026-04-08 03:17:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:17:34.878915 | orchestrator | 2026-04-08 03:17:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:17:37.925012 | orchestrator | 2026-04-08 03:17:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:17:37.926974 | orchestrator | 2026-04-08 03:17:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:17:37.927017 | orchestrator | 2026-04-08 03:17:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:17:40.969185 | orchestrator | 2026-04-08 03:17:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:17:40.969781 | orchestrator | 2026-04-08 03:17:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:17:40.969803 | orchestrator | 2026-04-08 03:17:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:17:44.015898 | orchestrator | 2026-04-08 03:17:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 03:17:44.016064 | orchestrator | 2026-04-08 03:17:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 03:17:44.016077 | orchestrator | 2026-04-08 03:17:44 | INFO  | Wait 1 second(s) until the next check
[... repetitive polling output trimmed: tasks e8d40863-7499-4ee6-9471-9efa1265639c and 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 both remained in state STARTED, polled roughly every 3 seconds from 03:17:47 to 03:25:16, with one ~2-minute gap in the log between 03:18:32 and 03:20:32 ...]
2026-04-08 03:25:16.419335 | orchestrator | 2026-04-08 03:25:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 03:25:16.420234 | orchestrator | 2026-04-08 03:25:16 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:25:16.420335 | orchestrator | 2026-04-08 03:25:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:25:19.467950 | orchestrator | 2026-04-08 03:25:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:25:19.470141 | orchestrator | 2026-04-08 03:25:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:25:19.470373 | orchestrator | 2026-04-08 03:25:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:25:22.514566 | orchestrator | 2026-04-08 03:25:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:25:22.517382 | orchestrator | 2026-04-08 03:25:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:25:22.517467 | orchestrator | 2026-04-08 03:25:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:25:25.569282 | orchestrator | 2026-04-08 03:25:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:25:25.571432 | orchestrator | 2026-04-08 03:25:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:25:25.571514 | orchestrator | 2026-04-08 03:25:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:25:28.615887 | orchestrator | 2026-04-08 03:25:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:25:28.617433 | orchestrator | 2026-04-08 03:25:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:25:28.617513 | orchestrator | 2026-04-08 03:25:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:25:31.662399 | orchestrator | 2026-04-08 03:25:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:25:31.664608 | orchestrator | 2026-04-08 03:25:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
03:25:31.664681 | orchestrator | 2026-04-08 03:25:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:25:34.709465 | orchestrator | 2026-04-08 03:25:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:25:34.711634 | orchestrator | 2026-04-08 03:25:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:25:34.711707 | orchestrator | 2026-04-08 03:25:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:25:37.748916 | orchestrator | 2026-04-08 03:25:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:25:37.750064 | orchestrator | 2026-04-08 03:25:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:25:37.750126 | orchestrator | 2026-04-08 03:25:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:25:40.794805 | orchestrator | 2026-04-08 03:25:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:25:40.796930 | orchestrator | 2026-04-08 03:25:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:25:40.796979 | orchestrator | 2026-04-08 03:25:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:25:43.843995 | orchestrator | 2026-04-08 03:25:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:25:43.845695 | orchestrator | 2026-04-08 03:25:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:25:43.845748 | orchestrator | 2026-04-08 03:25:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:25:46.893956 | orchestrator | 2026-04-08 03:25:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:25:46.895927 | orchestrator | 2026-04-08 03:25:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:25:46.896011 | orchestrator | 2026-04-08 03:25:46 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 03:25:49.936217 | orchestrator | 2026-04-08 03:25:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:25:49.938127 | orchestrator | 2026-04-08 03:25:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:25:49.938175 | orchestrator | 2026-04-08 03:25:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:25:52.985555 | orchestrator | 2026-04-08 03:25:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:25:52.987805 | orchestrator | 2026-04-08 03:25:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:25:52.987867 | orchestrator | 2026-04-08 03:25:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:25:56.038571 | orchestrator | 2026-04-08 03:25:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:25:56.041074 | orchestrator | 2026-04-08 03:25:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:25:56.041170 | orchestrator | 2026-04-08 03:25:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:25:59.091104 | orchestrator | 2026-04-08 03:25:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:25:59.092757 | orchestrator | 2026-04-08 03:25:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:25:59.092786 | orchestrator | 2026-04-08 03:25:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:26:02.143475 | orchestrator | 2026-04-08 03:26:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:26:02.146693 | orchestrator | 2026-04-08 03:26:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:26:02.146782 | orchestrator | 2026-04-08 03:26:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:26:05.200258 | orchestrator | 2026-04-08 
03:26:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:26:05.202566 | orchestrator | 2026-04-08 03:26:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:26:05.202635 | orchestrator | 2026-04-08 03:26:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:26:08.255534 | orchestrator | 2026-04-08 03:26:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:26:08.257312 | orchestrator | 2026-04-08 03:26:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:26:08.257439 | orchestrator | 2026-04-08 03:26:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:26:11.309243 | orchestrator | 2026-04-08 03:26:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:26:11.310331 | orchestrator | 2026-04-08 03:26:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:26:11.310730 | orchestrator | 2026-04-08 03:26:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:26:14.362271 | orchestrator | 2026-04-08 03:26:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:26:14.364353 | orchestrator | 2026-04-08 03:26:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:26:14.364480 | orchestrator | 2026-04-08 03:26:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:26:17.414457 | orchestrator | 2026-04-08 03:26:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:26:17.415086 | orchestrator | 2026-04-08 03:26:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:26:17.415176 | orchestrator | 2026-04-08 03:26:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:26:20.464345 | orchestrator | 2026-04-08 03:26:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 03:26:20.467620 | orchestrator | 2026-04-08 03:26:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:26:20.467709 | orchestrator | 2026-04-08 03:26:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:26:23.515388 | orchestrator | 2026-04-08 03:26:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:26:23.518630 | orchestrator | 2026-04-08 03:26:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:26:23.518692 | orchestrator | 2026-04-08 03:26:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:26:26.570355 | orchestrator | 2026-04-08 03:26:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:26:26.572004 | orchestrator | 2026-04-08 03:26:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:26:26.572052 | orchestrator | 2026-04-08 03:26:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:26:29.623778 | orchestrator | 2026-04-08 03:26:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:26:29.625617 | orchestrator | 2026-04-08 03:26:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:26:29.625648 | orchestrator | 2026-04-08 03:26:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:26:32.671056 | orchestrator | 2026-04-08 03:26:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:26:32.672812 | orchestrator | 2026-04-08 03:26:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:26:32.672871 | orchestrator | 2026-04-08 03:26:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:26:35.718450 | orchestrator | 2026-04-08 03:26:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:26:35.720224 | orchestrator | 2026-04-08 03:26:35 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:26:35.720443 | orchestrator | 2026-04-08 03:26:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:26:38.768513 | orchestrator | 2026-04-08 03:26:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:26:38.770056 | orchestrator | 2026-04-08 03:26:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:26:38.770115 | orchestrator | 2026-04-08 03:26:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:26:41.812979 | orchestrator | 2026-04-08 03:26:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:26:41.813388 | orchestrator | 2026-04-08 03:26:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:26:41.813421 | orchestrator | 2026-04-08 03:26:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:26:44.868417 | orchestrator | 2026-04-08 03:26:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:26:44.869099 | orchestrator | 2026-04-08 03:26:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:26:44.869133 | orchestrator | 2026-04-08 03:26:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:26:47.913364 | orchestrator | 2026-04-08 03:26:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:26:47.914589 | orchestrator | 2026-04-08 03:26:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:26:47.914695 | orchestrator | 2026-04-08 03:26:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:26:50.965733 | orchestrator | 2026-04-08 03:26:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:26:50.966932 | orchestrator | 2026-04-08 03:26:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
03:26:50.967054 | orchestrator | 2026-04-08 03:26:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:26:54.016860 | orchestrator | 2026-04-08 03:26:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:26:54.019208 | orchestrator | 2026-04-08 03:26:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:26:54.019364 | orchestrator | 2026-04-08 03:26:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:26:57.083135 | orchestrator | 2026-04-08 03:26:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:26:57.085465 | orchestrator | 2026-04-08 03:26:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:26:57.085822 | orchestrator | 2026-04-08 03:26:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:27:00.128263 | orchestrator | 2026-04-08 03:27:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:27:00.128394 | orchestrator | 2026-04-08 03:27:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:00.128405 | orchestrator | 2026-04-08 03:27:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:27:03.177849 | orchestrator | 2026-04-08 03:27:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:27:03.179703 | orchestrator | 2026-04-08 03:27:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:03.179764 | orchestrator | 2026-04-08 03:27:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:27:06.230659 | orchestrator | 2026-04-08 03:27:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:27:06.232459 | orchestrator | 2026-04-08 03:27:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:06.232507 | orchestrator | 2026-04-08 03:27:06 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 03:27:09.286524 | orchestrator | 2026-04-08 03:27:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:27:09.289367 | orchestrator | 2026-04-08 03:27:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:09.289449 | orchestrator | 2026-04-08 03:27:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:27:12.341439 | orchestrator | 2026-04-08 03:27:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:27:12.341999 | orchestrator | 2026-04-08 03:27:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:12.342124 | orchestrator | 2026-04-08 03:27:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:27:15.391561 | orchestrator | 2026-04-08 03:27:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:27:15.393687 | orchestrator | 2026-04-08 03:27:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:15.393783 | orchestrator | 2026-04-08 03:27:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:27:18.447839 | orchestrator | 2026-04-08 03:27:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:27:18.448751 | orchestrator | 2026-04-08 03:27:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:18.448799 | orchestrator | 2026-04-08 03:27:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:27:21.496140 | orchestrator | 2026-04-08 03:27:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:27:21.498438 | orchestrator | 2026-04-08 03:27:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:21.498503 | orchestrator | 2026-04-08 03:27:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:27:24.543132 | orchestrator | 2026-04-08 
03:27:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:27:24.545977 | orchestrator | 2026-04-08 03:27:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:24.546070 | orchestrator | 2026-04-08 03:27:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:27:27.596615 | orchestrator | 2026-04-08 03:27:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:27:27.599555 | orchestrator | 2026-04-08 03:27:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:27.599623 | orchestrator | 2026-04-08 03:27:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:27:30.654865 | orchestrator | 2026-04-08 03:27:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:27:30.656982 | orchestrator | 2026-04-08 03:27:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:30.657084 | orchestrator | 2026-04-08 03:27:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:27:33.702660 | orchestrator | 2026-04-08 03:27:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:27:33.705634 | orchestrator | 2026-04-08 03:27:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:33.705693 | orchestrator | 2026-04-08 03:27:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:27:36.760575 | orchestrator | 2026-04-08 03:27:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:27:36.763332 | orchestrator | 2026-04-08 03:27:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:36.763471 | orchestrator | 2026-04-08 03:27:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:27:39.810326 | orchestrator | 2026-04-08 03:27:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 03:27:39.812327 | orchestrator | 2026-04-08 03:27:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:39.812380 | orchestrator | 2026-04-08 03:27:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:27:42.864845 | orchestrator | 2026-04-08 03:27:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:27:42.866825 | orchestrator | 2026-04-08 03:27:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:42.866933 | orchestrator | 2026-04-08 03:27:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:27:45.914833 | orchestrator | 2026-04-08 03:27:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:27:45.915851 | orchestrator | 2026-04-08 03:27:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:45.915958 | orchestrator | 2026-04-08 03:27:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:27:48.964844 | orchestrator | 2026-04-08 03:27:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:27:48.965489 | orchestrator | 2026-04-08 03:27:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:48.965549 | orchestrator | 2026-04-08 03:27:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:27:52.012155 | orchestrator | 2026-04-08 03:27:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:27:52.013927 | orchestrator | 2026-04-08 03:27:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:52.014070 | orchestrator | 2026-04-08 03:27:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:27:55.057576 | orchestrator | 2026-04-08 03:27:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:27:55.059075 | orchestrator | 2026-04-08 03:27:55 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:55.059119 | orchestrator | 2026-04-08 03:27:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:27:58.111821 | orchestrator | 2026-04-08 03:27:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:27:58.113526 | orchestrator | 2026-04-08 03:27:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:27:58.113583 | orchestrator | 2026-04-08 03:27:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:28:01.153740 | orchestrator | 2026-04-08 03:28:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:28:01.155072 | orchestrator | 2026-04-08 03:28:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:28:01.155110 | orchestrator | 2026-04-08 03:28:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:28:04.197414 | orchestrator | 2026-04-08 03:28:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:28:04.200124 | orchestrator | 2026-04-08 03:28:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:28:04.200316 | orchestrator | 2026-04-08 03:28:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:28:07.247691 | orchestrator | 2026-04-08 03:28:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:28:07.250279 | orchestrator | 2026-04-08 03:28:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:28:07.250366 | orchestrator | 2026-04-08 03:28:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:28:10.306787 | orchestrator | 2026-04-08 03:28:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:28:10.310578 | orchestrator | 2026-04-08 03:28:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
03:28:10.310706 | orchestrator | 2026-04-08 03:28:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:28:13.358669 | orchestrator | 2026-04-08 03:28:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:28:13.360591 | orchestrator | 2026-04-08 03:28:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:28:13.360648 | orchestrator | 2026-04-08 03:28:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:28:16.405681 | orchestrator | 2026-04-08 03:28:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:28:16.407044 | orchestrator | 2026-04-08 03:28:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:28:16.407186 | orchestrator | 2026-04-08 03:28:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:28:19.458225 | orchestrator | 2026-04-08 03:28:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:28:19.459455 | orchestrator | 2026-04-08 03:28:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:28:19.459495 | orchestrator | 2026-04-08 03:28:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:28:22.511765 | orchestrator | 2026-04-08 03:28:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:28:22.513593 | orchestrator | 2026-04-08 03:28:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:28:22.513675 | orchestrator | 2026-04-08 03:28:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:28:25.560999 | orchestrator | 2026-04-08 03:28:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:28:25.563107 | orchestrator | 2026-04-08 03:28:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:28:25.563189 | orchestrator | 2026-04-08 03:28:25 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 03:28:28.602336 | orchestrator | 2026-04-08 03:28:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:28:28.604058 | orchestrator | 2026-04-08 03:28:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:28:28.604169 | orchestrator | 2026-04-08 03:28:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:28:31.659071 | orchestrator | 2026-04-08 03:28:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:28:31.661471 | orchestrator | 2026-04-08 03:28:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:28:31.661644 | orchestrator | 2026-04-08 03:28:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:28:34.709712 | orchestrator | 2026-04-08 03:28:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:28:34.710844 | orchestrator | 2026-04-08 03:28:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:28:34.710888 | orchestrator | 2026-04-08 03:28:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:28:37.761764 | orchestrator | 2026-04-08 03:28:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:28:37.762676 | orchestrator | 2026-04-08 03:28:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:28:37.762733 | orchestrator | 2026-04-08 03:28:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:28:40.816761 | orchestrator | 2026-04-08 03:28:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:28:40.818688 | orchestrator | 2026-04-08 03:28:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:28:40.818728 | orchestrator | 2026-04-08 03:28:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:28:43.866940 | orchestrator | 2026-04-08 
03:28:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:28:43.868644 | orchestrator | 2026-04-08 03:28:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:28:43.868685 | orchestrator | 2026-04-08 03:28:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:28:46.911878 | orchestrator | 2026-04-08 03:28:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:28:46.912967 | orchestrator | 2026-04-08 03:28:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:28:46.913021 | orchestrator | 2026-04-08 03:28:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:28:49.960650 | orchestrator | 2026-04-08 03:28:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:28:49.962503 | orchestrator | 2026-04-08 03:28:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:28:49.962551 | orchestrator | 2026-04-08 03:28:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:28:53.014663 | orchestrator | 2026-04-08 03:28:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:28:53.017045 | orchestrator | 2026-04-08 03:28:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:28:53.017179 | orchestrator | 2026-04-08 03:28:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:28:56.063589 | orchestrator | 2026-04-08 03:28:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:28:56.065371 | orchestrator | 2026-04-08 03:28:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:28:56.065429 | orchestrator | 2026-04-08 03:28:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:28:59.116764 | orchestrator | 2026-04-08 03:28:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 03:28:59.118254 | orchestrator | 2026-04-08 03:28:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:28:59.118430 | orchestrator | 2026-04-08 03:28:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:29:02.172382 | orchestrator | 2026-04-08 03:29:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:29:02.175737 | orchestrator | 2026-04-08 03:29:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:29:02.175816 | orchestrator | 2026-04-08 03:29:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:34:16.426246 | orchestrator | 2026-04-08 03:34:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state
STARTED 2026-04-08 03:34:16.427808 | orchestrator | 2026-04-08 03:34:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:34:16.427853 | orchestrator | 2026-04-08 03:34:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:34:19.474004 | orchestrator | 2026-04-08 03:34:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:34:19.476629 | orchestrator | 2026-04-08 03:34:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:34:19.476710 | orchestrator | 2026-04-08 03:34:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:34:22.523740 | orchestrator | 2026-04-08 03:34:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:34:22.525731 | orchestrator | 2026-04-08 03:34:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:34:22.525817 | orchestrator | 2026-04-08 03:34:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:34:25.574664 | orchestrator | 2026-04-08 03:34:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:34:25.577692 | orchestrator | 2026-04-08 03:34:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:34:25.577761 | orchestrator | 2026-04-08 03:34:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:34:28.626548 | orchestrator | 2026-04-08 03:34:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:34:28.628232 | orchestrator | 2026-04-08 03:34:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:34:28.628263 | orchestrator | 2026-04-08 03:34:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:34:31.675347 | orchestrator | 2026-04-08 03:34:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:34:31.679061 | orchestrator | 2026-04-08 03:34:31 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:34:31.679163 | orchestrator | 2026-04-08 03:34:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:34:34.729400 | orchestrator | 2026-04-08 03:34:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:34:34.732793 | orchestrator | 2026-04-08 03:34:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:34:34.732873 | orchestrator | 2026-04-08 03:34:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:34:37.785134 | orchestrator | 2026-04-08 03:34:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:34:37.786850 | orchestrator | 2026-04-08 03:34:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:34:37.786907 | orchestrator | 2026-04-08 03:34:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:34:40.835997 | orchestrator | 2026-04-08 03:34:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:34:40.837799 | orchestrator | 2026-04-08 03:34:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:34:40.837883 | orchestrator | 2026-04-08 03:34:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:34:43.885802 | orchestrator | 2026-04-08 03:34:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:34:43.887202 | orchestrator | 2026-04-08 03:34:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:34:43.887246 | orchestrator | 2026-04-08 03:34:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:34:46.934312 | orchestrator | 2026-04-08 03:34:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:34:46.936723 | orchestrator | 2026-04-08 03:34:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
03:34:46.936789 | orchestrator | 2026-04-08 03:34:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:34:49.988326 | orchestrator | 2026-04-08 03:34:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:34:49.989688 | orchestrator | 2026-04-08 03:34:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:34:49.989773 | orchestrator | 2026-04-08 03:34:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:34:53.041668 | orchestrator | 2026-04-08 03:34:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:34:53.044680 | orchestrator | 2026-04-08 03:34:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:34:53.044752 | orchestrator | 2026-04-08 03:34:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:34:56.082344 | orchestrator | 2026-04-08 03:34:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:34:56.083951 | orchestrator | 2026-04-08 03:34:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:34:56.084018 | orchestrator | 2026-04-08 03:34:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:34:59.134423 | orchestrator | 2026-04-08 03:34:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:34:59.136053 | orchestrator | 2026-04-08 03:34:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:34:59.136154 | orchestrator | 2026-04-08 03:34:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:35:02.188550 | orchestrator | 2026-04-08 03:35:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:35:02.190975 | orchestrator | 2026-04-08 03:35:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:35:02.191287 | orchestrator | 2026-04-08 03:35:02 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 03:35:05.245025 | orchestrator | 2026-04-08 03:35:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:35:05.248041 | orchestrator | 2026-04-08 03:35:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:35:05.248127 | orchestrator | 2026-04-08 03:35:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:35:08.295062 | orchestrator | 2026-04-08 03:35:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:35:08.296129 | orchestrator | 2026-04-08 03:35:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:35:08.296262 | orchestrator | 2026-04-08 03:35:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:35:11.339209 | orchestrator | 2026-04-08 03:35:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:35:11.342431 | orchestrator | 2026-04-08 03:35:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:35:11.342491 | orchestrator | 2026-04-08 03:35:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:35:14.390287 | orchestrator | 2026-04-08 03:35:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:35:14.392034 | orchestrator | 2026-04-08 03:35:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:35:14.392094 | orchestrator | 2026-04-08 03:35:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:35:17.438228 | orchestrator | 2026-04-08 03:35:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:35:17.438865 | orchestrator | 2026-04-08 03:35:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:35:17.438916 | orchestrator | 2026-04-08 03:35:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:35:20.488897 | orchestrator | 2026-04-08 
03:35:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:35:20.490264 | orchestrator | 2026-04-08 03:35:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:35:20.490317 | orchestrator | 2026-04-08 03:35:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:35:23.536023 | orchestrator | 2026-04-08 03:35:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:35:23.536732 | orchestrator | 2026-04-08 03:35:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:35:23.536777 | orchestrator | 2026-04-08 03:35:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:35:26.588508 | orchestrator | 2026-04-08 03:35:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:35:26.592181 | orchestrator | 2026-04-08 03:35:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:35:26.592226 | orchestrator | 2026-04-08 03:35:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:35:29.645064 | orchestrator | 2026-04-08 03:35:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:35:29.645885 | orchestrator | 2026-04-08 03:35:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:35:29.645942 | orchestrator | 2026-04-08 03:35:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:35:32.689650 | orchestrator | 2026-04-08 03:35:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:35:32.689752 | orchestrator | 2026-04-08 03:35:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:35:32.689768 | orchestrator | 2026-04-08 03:35:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:35:35.737526 | orchestrator | 2026-04-08 03:35:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 03:35:35.739288 | orchestrator | 2026-04-08 03:35:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:35:35.739326 | orchestrator | 2026-04-08 03:35:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:35:38.786808 | orchestrator | 2026-04-08 03:35:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:35:38.788054 | orchestrator | 2026-04-08 03:35:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:35:38.788105 | orchestrator | 2026-04-08 03:35:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:35:41.828250 | orchestrator | 2026-04-08 03:35:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:35:41.829683 | orchestrator | 2026-04-08 03:35:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:35:41.829748 | orchestrator | 2026-04-08 03:35:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:35:44.879547 | orchestrator | 2026-04-08 03:35:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:35:44.881311 | orchestrator | 2026-04-08 03:35:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:35:44.881359 | orchestrator | 2026-04-08 03:35:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:35:47.931056 | orchestrator | 2026-04-08 03:35:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:35:47.933444 | orchestrator | 2026-04-08 03:35:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:35:47.933518 | orchestrator | 2026-04-08 03:35:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:35:50.984928 | orchestrator | 2026-04-08 03:35:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:35:50.986123 | orchestrator | 2026-04-08 03:35:50 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:35:50.986192 | orchestrator | 2026-04-08 03:35:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:35:54.040135 | orchestrator | 2026-04-08 03:35:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:35:54.041677 | orchestrator | 2026-04-08 03:35:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:35:54.041734 | orchestrator | 2026-04-08 03:35:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:35:57.092767 | orchestrator | 2026-04-08 03:35:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:35:57.094609 | orchestrator | 2026-04-08 03:35:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:35:57.094684 | orchestrator | 2026-04-08 03:35:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:36:00.137685 | orchestrator | 2026-04-08 03:36:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:36:00.139211 | orchestrator | 2026-04-08 03:36:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:36:00.139245 | orchestrator | 2026-04-08 03:36:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:36:03.184826 | orchestrator | 2026-04-08 03:36:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:36:03.186273 | orchestrator | 2026-04-08 03:36:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:36:03.188128 | orchestrator | 2026-04-08 03:36:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:36:06.240157 | orchestrator | 2026-04-08 03:36:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:36:06.241822 | orchestrator | 2026-04-08 03:36:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
03:36:06.241879 | orchestrator | 2026-04-08 03:36:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:36:09.300578 | orchestrator | 2026-04-08 03:36:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:36:09.301991 | orchestrator | 2026-04-08 03:36:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:36:09.302137 | orchestrator | 2026-04-08 03:36:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:36:12.354047 | orchestrator | 2026-04-08 03:36:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:36:12.356130 | orchestrator | 2026-04-08 03:36:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:36:12.356188 | orchestrator | 2026-04-08 03:36:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:36:15.413026 | orchestrator | 2026-04-08 03:36:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:36:15.415099 | orchestrator | 2026-04-08 03:36:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:36:15.415202 | orchestrator | 2026-04-08 03:36:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:36:18.455217 | orchestrator | 2026-04-08 03:36:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:36:18.457938 | orchestrator | 2026-04-08 03:36:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:36:18.458127 | orchestrator | 2026-04-08 03:36:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:36:21.506552 | orchestrator | 2026-04-08 03:36:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:36:21.509503 | orchestrator | 2026-04-08 03:36:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:36:21.509546 | orchestrator | 2026-04-08 03:36:21 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 03:36:24.559785 | orchestrator | 2026-04-08 03:36:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:36:24.561354 | orchestrator | 2026-04-08 03:36:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:36:24.561392 | orchestrator | 2026-04-08 03:36:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:36:27.609175 | orchestrator | 2026-04-08 03:36:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:36:27.611646 | orchestrator | 2026-04-08 03:36:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:36:27.611716 | orchestrator | 2026-04-08 03:36:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:36:30.655145 | orchestrator | 2026-04-08 03:36:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:36:30.657450 | orchestrator | 2026-04-08 03:36:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:36:30.657499 | orchestrator | 2026-04-08 03:36:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:36:33.709634 | orchestrator | 2026-04-08 03:36:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:36:33.711052 | orchestrator | 2026-04-08 03:36:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:36:33.711097 | orchestrator | 2026-04-08 03:36:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:36:36.761101 | orchestrator | 2026-04-08 03:36:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:36:36.762738 | orchestrator | 2026-04-08 03:36:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:36:36.762794 | orchestrator | 2026-04-08 03:36:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:36:39.817737 | orchestrator | 2026-04-08 
03:36:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:36:39.820565 | orchestrator | 2026-04-08 03:36:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:36:39.820627 | orchestrator | 2026-04-08 03:36:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:36:42.862650 | orchestrator | 2026-04-08 03:36:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:36:42.863690 | orchestrator | 2026-04-08 03:36:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:36:42.863768 | orchestrator | 2026-04-08 03:36:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:36:45.911636 | orchestrator | 2026-04-08 03:36:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:36:45.914285 | orchestrator | 2026-04-08 03:36:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:36:45.914437 | orchestrator | 2026-04-08 03:36:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:36:48.962329 | orchestrator | 2026-04-08 03:36:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:36:48.964120 | orchestrator | 2026-04-08 03:36:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:36:48.964172 | orchestrator | 2026-04-08 03:36:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:36:52.016625 | orchestrator | 2026-04-08 03:36:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:36:52.017626 | orchestrator | 2026-04-08 03:36:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:36:52.017679 | orchestrator | 2026-04-08 03:36:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:36:55.065345 | orchestrator | 2026-04-08 03:36:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 03:36:55.067581 | orchestrator | 2026-04-08 03:36:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:36:55.067640 | orchestrator | 2026-04-08 03:36:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:36:58.116735 | orchestrator | 2026-04-08 03:36:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:36:58.118962 | orchestrator | 2026-04-08 03:36:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:36:58.119100 | orchestrator | 2026-04-08 03:36:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:37:01.168281 | orchestrator | 2026-04-08 03:37:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:37:01.169472 | orchestrator | 2026-04-08 03:37:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:37:01.169747 | orchestrator | 2026-04-08 03:37:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:37:04.218671 | orchestrator | 2026-04-08 03:37:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:37:04.221273 | orchestrator | 2026-04-08 03:37:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:37:04.221868 | orchestrator | 2026-04-08 03:37:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:37:07.273795 | orchestrator | 2026-04-08 03:37:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:37:07.275172 | orchestrator | 2026-04-08 03:37:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:37:07.275222 | orchestrator | 2026-04-08 03:37:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:37:10.320447 | orchestrator | 2026-04-08 03:37:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:37:10.322583 | orchestrator | 2026-04-08 03:37:10 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:37:10.322643 | orchestrator | 2026-04-08 03:37:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:37:13.374939 | orchestrator | 2026-04-08 03:37:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:37:13.377428 | orchestrator | 2026-04-08 03:37:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:37:13.377549 | orchestrator | 2026-04-08 03:37:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:37:16.422235 | orchestrator | 2026-04-08 03:37:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:37:16.424447 | orchestrator | 2026-04-08 03:37:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:37:16.424492 | orchestrator | 2026-04-08 03:37:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:37:19.466361 | orchestrator | 2026-04-08 03:37:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:37:19.467636 | orchestrator | 2026-04-08 03:37:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:37:19.467697 | orchestrator | 2026-04-08 03:37:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:37:22.516950 | orchestrator | 2026-04-08 03:37:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:37:22.518433 | orchestrator | 2026-04-08 03:37:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:37:22.518640 | orchestrator | 2026-04-08 03:37:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:37:25.569846 | orchestrator | 2026-04-08 03:37:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:37:25.571882 | orchestrator | 2026-04-08 03:37:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
03:37:25.571954 | orchestrator | 2026-04-08 03:37:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:37:28.619485 | orchestrator | 2026-04-08 03:37:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:37:28.620382 | orchestrator | 2026-04-08 03:37:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:37:28.620472 | orchestrator | 2026-04-08 03:37:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:37:31.667007 | orchestrator | 2026-04-08 03:37:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:37:31.668228 | orchestrator | 2026-04-08 03:37:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:37:31.668252 | orchestrator | 2026-04-08 03:37:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:37:34.718369 | orchestrator | 2026-04-08 03:37:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:37:34.720247 | orchestrator | 2026-04-08 03:37:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:37:34.720311 | orchestrator | 2026-04-08 03:37:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:37:37.761567 | orchestrator | 2026-04-08 03:37:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:37:37.762388 | orchestrator | 2026-04-08 03:37:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:37:37.762594 | orchestrator | 2026-04-08 03:37:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:37:40.808776 | orchestrator | 2026-04-08 03:37:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:37:40.811843 | orchestrator | 2026-04-08 03:37:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:37:40.811942 | orchestrator | 2026-04-08 03:37:40 | INFO  | Wait 1 second(s) 
until the next check
2026-04-08 03:37:43.864723 | orchestrator | 2026-04-08 03:37:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 03:37:43.867402 | orchestrator | 2026-04-08 03:37:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 03:37:43.867450 | orchestrator | 2026-04-08 03:37:43 | INFO  | Wait 1 second(s) until the next check
2026-04-08 03:42:58.100270 | orchestrator | 2026-04-08 03:42:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 03:42:58.103007 | orchestrator | 2026-04-08 03:42:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 03:42:58.103074 | orchestrator | 2026-04-08 03:42:58 | INFO  | Wait 1 second(s)
until the next check 2026-04-08 03:43:01.150620 | orchestrator | 2026-04-08 03:43:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:43:01.151805 | orchestrator | 2026-04-08 03:43:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:01.151849 | orchestrator | 2026-04-08 03:43:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:43:04.204720 | orchestrator | 2026-04-08 03:43:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:43:04.206251 | orchestrator | 2026-04-08 03:43:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:04.206848 | orchestrator | 2026-04-08 03:43:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:43:07.265066 | orchestrator | 2026-04-08 03:43:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:43:07.266226 | orchestrator | 2026-04-08 03:43:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:07.266268 | orchestrator | 2026-04-08 03:43:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:43:10.314503 | orchestrator | 2026-04-08 03:43:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:43:10.315571 | orchestrator | 2026-04-08 03:43:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:10.315670 | orchestrator | 2026-04-08 03:43:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:43:13.362305 | orchestrator | 2026-04-08 03:43:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:43:13.365385 | orchestrator | 2026-04-08 03:43:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:13.365583 | orchestrator | 2026-04-08 03:43:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:43:16.419101 | orchestrator | 2026-04-08 
03:43:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:43:16.421217 | orchestrator | 2026-04-08 03:43:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:16.421304 | orchestrator | 2026-04-08 03:43:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:43:19.469168 | orchestrator | 2026-04-08 03:43:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:43:19.470659 | orchestrator | 2026-04-08 03:43:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:19.470708 | orchestrator | 2026-04-08 03:43:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:43:22.527548 | orchestrator | 2026-04-08 03:43:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:43:22.529724 | orchestrator | 2026-04-08 03:43:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:22.529804 | orchestrator | 2026-04-08 03:43:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:43:25.581958 | orchestrator | 2026-04-08 03:43:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:43:25.583436 | orchestrator | 2026-04-08 03:43:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:25.583526 | orchestrator | 2026-04-08 03:43:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:43:28.624612 | orchestrator | 2026-04-08 03:43:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:43:28.626265 | orchestrator | 2026-04-08 03:43:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:28.626329 | orchestrator | 2026-04-08 03:43:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:43:31.669547 | orchestrator | 2026-04-08 03:43:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 03:43:31.671075 | orchestrator | 2026-04-08 03:43:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:31.671126 | orchestrator | 2026-04-08 03:43:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:43:34.716645 | orchestrator | 2026-04-08 03:43:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:43:34.718010 | orchestrator | 2026-04-08 03:43:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:34.718225 | orchestrator | 2026-04-08 03:43:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:43:37.767904 | orchestrator | 2026-04-08 03:43:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:43:37.767986 | orchestrator | 2026-04-08 03:43:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:37.767998 | orchestrator | 2026-04-08 03:43:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:43:40.817881 | orchestrator | 2026-04-08 03:43:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:43:40.820877 | orchestrator | 2026-04-08 03:43:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:40.821049 | orchestrator | 2026-04-08 03:43:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:43:43.869698 | orchestrator | 2026-04-08 03:43:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:43:43.871363 | orchestrator | 2026-04-08 03:43:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:43.871438 | orchestrator | 2026-04-08 03:43:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:43:46.919521 | orchestrator | 2026-04-08 03:43:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:43:46.920813 | orchestrator | 2026-04-08 03:43:46 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:46.920902 | orchestrator | 2026-04-08 03:43:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:43:49.972494 | orchestrator | 2026-04-08 03:43:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:43:49.973871 | orchestrator | 2026-04-08 03:43:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:49.974164 | orchestrator | 2026-04-08 03:43:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:43:53.032932 | orchestrator | 2026-04-08 03:43:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:43:53.036097 | orchestrator | 2026-04-08 03:43:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:53.036166 | orchestrator | 2026-04-08 03:43:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:43:56.085244 | orchestrator | 2026-04-08 03:43:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:43:56.087808 | orchestrator | 2026-04-08 03:43:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:56.087876 | orchestrator | 2026-04-08 03:43:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:43:59.134146 | orchestrator | 2026-04-08 03:43:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:43:59.135448 | orchestrator | 2026-04-08 03:43:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:43:59.135488 | orchestrator | 2026-04-08 03:43:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:44:02.185237 | orchestrator | 2026-04-08 03:44:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:44:02.189383 | orchestrator | 2026-04-08 03:44:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
03:44:02.189498 | orchestrator | 2026-04-08 03:44:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:44:05.236647 | orchestrator | 2026-04-08 03:44:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:44:05.237836 | orchestrator | 2026-04-08 03:44:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:44:05.237884 | orchestrator | 2026-04-08 03:44:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:44:08.291077 | orchestrator | 2026-04-08 03:44:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:44:08.292817 | orchestrator | 2026-04-08 03:44:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:44:08.292875 | orchestrator | 2026-04-08 03:44:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:44:11.341240 | orchestrator | 2026-04-08 03:44:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:44:11.342598 | orchestrator | 2026-04-08 03:44:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:44:11.342637 | orchestrator | 2026-04-08 03:44:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:44:14.391602 | orchestrator | 2026-04-08 03:44:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:44:14.394153 | orchestrator | 2026-04-08 03:44:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:44:14.394297 | orchestrator | 2026-04-08 03:44:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:44:17.442503 | orchestrator | 2026-04-08 03:44:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:44:17.443931 | orchestrator | 2026-04-08 03:44:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:44:17.444046 | orchestrator | 2026-04-08 03:44:17 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 03:44:20.489870 | orchestrator | 2026-04-08 03:44:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:44:20.491663 | orchestrator | 2026-04-08 03:44:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:44:20.491724 | orchestrator | 2026-04-08 03:44:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:44:23.542942 | orchestrator | 2026-04-08 03:44:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:44:23.544613 | orchestrator | 2026-04-08 03:44:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:44:23.544675 | orchestrator | 2026-04-08 03:44:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:44:26.586665 | orchestrator | 2026-04-08 03:44:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:44:26.590544 | orchestrator | 2026-04-08 03:44:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:44:26.590653 | orchestrator | 2026-04-08 03:44:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:44:29.635403 | orchestrator | 2026-04-08 03:44:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:44:29.639223 | orchestrator | 2026-04-08 03:44:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:44:29.639303 | orchestrator | 2026-04-08 03:44:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:44:32.688107 | orchestrator | 2026-04-08 03:44:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:44:32.689899 | orchestrator | 2026-04-08 03:44:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:44:32.690085 | orchestrator | 2026-04-08 03:44:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:44:35.738181 | orchestrator | 2026-04-08 
03:44:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:44:35.739738 | orchestrator | 2026-04-08 03:44:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:44:35.739782 | orchestrator | 2026-04-08 03:44:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:44:38.788249 | orchestrator | 2026-04-08 03:44:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:44:38.789897 | orchestrator | 2026-04-08 03:44:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:44:38.790066 | orchestrator | 2026-04-08 03:44:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:44:41.833665 | orchestrator | 2026-04-08 03:44:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:44:41.838289 | orchestrator | 2026-04-08 03:44:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:44:41.838378 | orchestrator | 2026-04-08 03:44:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:44:44.880023 | orchestrator | 2026-04-08 03:44:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:44:44.881473 | orchestrator | 2026-04-08 03:44:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:44:44.881542 | orchestrator | 2026-04-08 03:44:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:44:47.934505 | orchestrator | 2026-04-08 03:44:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:44:47.937482 | orchestrator | 2026-04-08 03:44:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:44:47.937544 | orchestrator | 2026-04-08 03:44:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:44:50.989585 | orchestrator | 2026-04-08 03:44:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 03:44:50.991307 | orchestrator | 2026-04-08 03:44:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:44:50.991356 | orchestrator | 2026-04-08 03:44:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:44:54.044604 | orchestrator | 2026-04-08 03:44:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:44:54.046166 | orchestrator | 2026-04-08 03:44:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:44:54.046222 | orchestrator | 2026-04-08 03:44:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:44:57.100585 | orchestrator | 2026-04-08 03:44:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:44:57.101470 | orchestrator | 2026-04-08 03:44:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:44:57.101503 | orchestrator | 2026-04-08 03:44:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:45:00.146372 | orchestrator | 2026-04-08 03:45:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:00.147217 | orchestrator | 2026-04-08 03:45:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:45:00.147269 | orchestrator | 2026-04-08 03:45:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:45:03.204539 | orchestrator | 2026-04-08 03:45:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:03.204791 | orchestrator | 2026-04-08 03:45:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:45:03.206114 | orchestrator | 2026-04-08 03:45:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:45:06.254880 | orchestrator | 2026-04-08 03:45:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:06.256801 | orchestrator | 2026-04-08 03:45:06 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:45:06.256842 | orchestrator | 2026-04-08 03:45:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:45:09.313693 | orchestrator | 2026-04-08 03:45:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:09.314824 | orchestrator | 2026-04-08 03:45:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:45:09.314984 | orchestrator | 2026-04-08 03:45:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:45:12.361202 | orchestrator | 2026-04-08 03:45:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:12.363079 | orchestrator | 2026-04-08 03:45:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:45:12.363120 | orchestrator | 2026-04-08 03:45:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:45:15.411754 | orchestrator | 2026-04-08 03:45:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:15.413675 | orchestrator | 2026-04-08 03:45:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:45:15.413770 | orchestrator | 2026-04-08 03:45:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:45:18.460813 | orchestrator | 2026-04-08 03:45:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:18.462385 | orchestrator | 2026-04-08 03:45:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:45:18.462747 | orchestrator | 2026-04-08 03:45:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:45:21.505123 | orchestrator | 2026-04-08 03:45:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:21.506599 | orchestrator | 2026-04-08 03:45:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
03:45:21.506640 | orchestrator | 2026-04-08 03:45:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:45:24.559243 | orchestrator | 2026-04-08 03:45:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:24.560970 | orchestrator | 2026-04-08 03:45:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:45:24.561016 | orchestrator | 2026-04-08 03:45:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:45:27.608767 | orchestrator | 2026-04-08 03:45:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:27.610194 | orchestrator | 2026-04-08 03:45:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:45:27.610568 | orchestrator | 2026-04-08 03:45:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:45:30.656769 | orchestrator | 2026-04-08 03:45:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:30.658574 | orchestrator | 2026-04-08 03:45:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:45:30.658656 | orchestrator | 2026-04-08 03:45:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:45:33.707257 | orchestrator | 2026-04-08 03:45:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:33.709086 | orchestrator | 2026-04-08 03:45:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:45:33.709147 | orchestrator | 2026-04-08 03:45:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:45:36.751481 | orchestrator | 2026-04-08 03:45:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:36.754921 | orchestrator | 2026-04-08 03:45:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:45:36.755025 | orchestrator | 2026-04-08 03:45:36 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 03:45:39.795944 | orchestrator | 2026-04-08 03:45:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:39.798346 | orchestrator | 2026-04-08 03:45:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:45:39.798412 | orchestrator | 2026-04-08 03:45:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:45:42.849805 | orchestrator | 2026-04-08 03:45:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:42.851723 | orchestrator | 2026-04-08 03:45:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:45:42.851778 | orchestrator | 2026-04-08 03:45:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:45:45.903303 | orchestrator | 2026-04-08 03:45:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:45.906631 | orchestrator | 2026-04-08 03:45:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:45:45.906771 | orchestrator | 2026-04-08 03:45:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:45:48.955342 | orchestrator | 2026-04-08 03:45:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:48.957515 | orchestrator | 2026-04-08 03:45:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:45:48.957580 | orchestrator | 2026-04-08 03:45:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:45:52.007607 | orchestrator | 2026-04-08 03:45:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:52.008668 | orchestrator | 2026-04-08 03:45:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:45:52.008728 | orchestrator | 2026-04-08 03:45:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:45:55.048767 | orchestrator | 2026-04-08 
03:45:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:55.050277 | orchestrator | 2026-04-08 03:45:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:45:55.050316 | orchestrator | 2026-04-08 03:45:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:45:58.098109 | orchestrator | 2026-04-08 03:45:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:45:58.099550 | orchestrator | 2026-04-08 03:45:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:45:58.099603 | orchestrator | 2026-04-08 03:45:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:46:01.147918 | orchestrator | 2026-04-08 03:46:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:46:01.149006 | orchestrator | 2026-04-08 03:46:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:46:01.149045 | orchestrator | 2026-04-08 03:46:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:46:04.202317 | orchestrator | 2026-04-08 03:46:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:46:04.204174 | orchestrator | 2026-04-08 03:46:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:46:04.204238 | orchestrator | 2026-04-08 03:46:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:46:07.254070 | orchestrator | 2026-04-08 03:46:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:46:07.256541 | orchestrator | 2026-04-08 03:46:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:46:07.256606 | orchestrator | 2026-04-08 03:46:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:46:10.303530 | orchestrator | 2026-04-08 03:46:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 03:46:10.304607 | orchestrator | 2026-04-08 03:46:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:46:10.304645 | orchestrator | 2026-04-08 03:46:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:46:13.349129 | orchestrator | 2026-04-08 03:46:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:46:13.350215 | orchestrator | 2026-04-08 03:46:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:46:13.350441 | orchestrator | 2026-04-08 03:46:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:46:16.397302 | orchestrator | 2026-04-08 03:46:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:46:16.398128 | orchestrator | 2026-04-08 03:46:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:46:16.398790 | orchestrator | 2026-04-08 03:46:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:46:19.439962 | orchestrator | 2026-04-08 03:46:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:46:19.441897 | orchestrator | 2026-04-08 03:46:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:46:19.442147 | orchestrator | 2026-04-08 03:46:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:46:22.492129 | orchestrator | 2026-04-08 03:46:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:46:22.494406 | orchestrator | 2026-04-08 03:46:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:46:22.495003 | orchestrator | 2026-04-08 03:46:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:46:25.548712 | orchestrator | 2026-04-08 03:46:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:46:25.550702 | orchestrator | 2026-04-08 03:46:25 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:46:25.550852 | orchestrator | 2026-04-08 03:46:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:46:28.597961 | orchestrator | 2026-04-08 03:46:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:46:28.599564 | orchestrator | 2026-04-08 03:46:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:46:28.599661 | orchestrator | 2026-04-08 03:46:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:46:31.649699 | orchestrator | 2026-04-08 03:46:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:46:31.652598 | orchestrator | 2026-04-08 03:46:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:46:31.652676 | orchestrator | 2026-04-08 03:46:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:46:34.699977 | orchestrator | 2026-04-08 03:46:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:46:34.701856 | orchestrator | 2026-04-08 03:46:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:46:34.701939 | orchestrator | 2026-04-08 03:46:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:46:37.754162 | orchestrator | 2026-04-08 03:46:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:46:37.757444 | orchestrator | 2026-04-08 03:46:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:46:37.757518 | orchestrator | 2026-04-08 03:46:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:46:40.805895 | orchestrator | 2026-04-08 03:46:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:46:40.808109 | orchestrator | 2026-04-08 03:46:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
03:46:40.808167 | orchestrator | 2026-04-08 03:46:40 | INFO  | Wait 1 second(s) until the next check
2026-04-08 03:46:43.857437 | orchestrator | 2026-04-08 03:46:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 03:46:43.859093 | orchestrator | 2026-04-08 03:46:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 03:46:43.859151 | orchestrator | 2026-04-08 03:46:43 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 03:46:46 to 03:54:10 elided (with a pause between 03:49:34 and 03:51:37); both tasks remained in state STARTED throughout ...]
2026-04-08 03:54:13.228500 | orchestrator | 2026-04-08 03:54:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 03:54:13.230418 | orchestrator | 2026-04-08 03:54:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 03:54:13.230460 | orchestrator | 2026-04-08 03:54:13 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 03:54:16.279208 | orchestrator | 2026-04-08 03:54:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:54:16.281571 | orchestrator | 2026-04-08 03:54:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:54:16.281636 | orchestrator | 2026-04-08 03:54:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:54:19.338257 | orchestrator | 2026-04-08 03:54:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:54:19.339716 | orchestrator | 2026-04-08 03:54:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:54:19.339761 | orchestrator | 2026-04-08 03:54:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:54:22.384857 | orchestrator | 2026-04-08 03:54:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:54:22.386436 | orchestrator | 2026-04-08 03:54:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:54:22.386485 | orchestrator | 2026-04-08 03:54:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:54:25.433044 | orchestrator | 2026-04-08 03:54:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:54:25.435597 | orchestrator | 2026-04-08 03:54:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:54:25.435668 | orchestrator | 2026-04-08 03:54:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:54:28.481794 | orchestrator | 2026-04-08 03:54:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:54:28.484487 | orchestrator | 2026-04-08 03:54:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:54:28.484538 | orchestrator | 2026-04-08 03:54:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:54:31.530963 | orchestrator | 2026-04-08 
03:54:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:54:31.532770 | orchestrator | 2026-04-08 03:54:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:54:31.532817 | orchestrator | 2026-04-08 03:54:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:54:34.579114 | orchestrator | 2026-04-08 03:54:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:54:34.580042 | orchestrator | 2026-04-08 03:54:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:54:34.580088 | orchestrator | 2026-04-08 03:54:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:54:37.625674 | orchestrator | 2026-04-08 03:54:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:54:37.626683 | orchestrator | 2026-04-08 03:54:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:54:37.626732 | orchestrator | 2026-04-08 03:54:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:54:40.676526 | orchestrator | 2026-04-08 03:54:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:54:40.677770 | orchestrator | 2026-04-08 03:54:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:54:40.677819 | orchestrator | 2026-04-08 03:54:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:54:43.728239 | orchestrator | 2026-04-08 03:54:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:54:43.730863 | orchestrator | 2026-04-08 03:54:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:54:43.730915 | orchestrator | 2026-04-08 03:54:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:54:46.780133 | orchestrator | 2026-04-08 03:54:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 03:54:46.782267 | orchestrator | 2026-04-08 03:54:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:54:46.782312 | orchestrator | 2026-04-08 03:54:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:54:49.826444 | orchestrator | 2026-04-08 03:54:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:54:49.826576 | orchestrator | 2026-04-08 03:54:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:54:49.826590 | orchestrator | 2026-04-08 03:54:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:54:52.873547 | orchestrator | 2026-04-08 03:54:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:54:52.875516 | orchestrator | 2026-04-08 03:54:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:54:52.875605 | orchestrator | 2026-04-08 03:54:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:54:55.922754 | orchestrator | 2026-04-08 03:54:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:54:55.924278 | orchestrator | 2026-04-08 03:54:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:54:55.924487 | orchestrator | 2026-04-08 03:54:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:54:58.971910 | orchestrator | 2026-04-08 03:54:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:54:58.973163 | orchestrator | 2026-04-08 03:54:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:54:58.973184 | orchestrator | 2026-04-08 03:54:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:55:02.029671 | orchestrator | 2026-04-08 03:55:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:02.032226 | orchestrator | 2026-04-08 03:55:02 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:55:02.032295 | orchestrator | 2026-04-08 03:55:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:55:05.072479 | orchestrator | 2026-04-08 03:55:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:05.073999 | orchestrator | 2026-04-08 03:55:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:55:05.074144 | orchestrator | 2026-04-08 03:55:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:55:08.119933 | orchestrator | 2026-04-08 03:55:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:08.122145 | orchestrator | 2026-04-08 03:55:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:55:08.122189 | orchestrator | 2026-04-08 03:55:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:55:11.165681 | orchestrator | 2026-04-08 03:55:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:11.167706 | orchestrator | 2026-04-08 03:55:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:55:11.167778 | orchestrator | 2026-04-08 03:55:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:55:14.215390 | orchestrator | 2026-04-08 03:55:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:14.218050 | orchestrator | 2026-04-08 03:55:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:55:14.218121 | orchestrator | 2026-04-08 03:55:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:55:17.266920 | orchestrator | 2026-04-08 03:55:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:17.268528 | orchestrator | 2026-04-08 03:55:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
03:55:17.270086 | orchestrator | 2026-04-08 03:55:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:55:20.313660 | orchestrator | 2026-04-08 03:55:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:20.314398 | orchestrator | 2026-04-08 03:55:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:55:20.314492 | orchestrator | 2026-04-08 03:55:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:55:23.366265 | orchestrator | 2026-04-08 03:55:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:23.368402 | orchestrator | 2026-04-08 03:55:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:55:23.368895 | orchestrator | 2026-04-08 03:55:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:55:26.414578 | orchestrator | 2026-04-08 03:55:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:26.417632 | orchestrator | 2026-04-08 03:55:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:55:26.463714 | orchestrator | 2026-04-08 03:55:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:55:29.460219 | orchestrator | 2026-04-08 03:55:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:29.462324 | orchestrator | 2026-04-08 03:55:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:55:29.462382 | orchestrator | 2026-04-08 03:55:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:55:32.509961 | orchestrator | 2026-04-08 03:55:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:32.512004 | orchestrator | 2026-04-08 03:55:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:55:32.512121 | orchestrator | 2026-04-08 03:55:32 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 03:55:35.560813 | orchestrator | 2026-04-08 03:55:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:35.562376 | orchestrator | 2026-04-08 03:55:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:55:35.562518 | orchestrator | 2026-04-08 03:55:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:55:38.607418 | orchestrator | 2026-04-08 03:55:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:38.610312 | orchestrator | 2026-04-08 03:55:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:55:38.610395 | orchestrator | 2026-04-08 03:55:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:55:41.650641 | orchestrator | 2026-04-08 03:55:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:41.653200 | orchestrator | 2026-04-08 03:55:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:55:41.653408 | orchestrator | 2026-04-08 03:55:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:55:44.700918 | orchestrator | 2026-04-08 03:55:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:44.703217 | orchestrator | 2026-04-08 03:55:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:55:44.703340 | orchestrator | 2026-04-08 03:55:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:55:47.745749 | orchestrator | 2026-04-08 03:55:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:47.746321 | orchestrator | 2026-04-08 03:55:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:55:47.746353 | orchestrator | 2026-04-08 03:55:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:55:50.791989 | orchestrator | 2026-04-08 
03:55:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:50.794215 | orchestrator | 2026-04-08 03:55:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:55:50.794327 | orchestrator | 2026-04-08 03:55:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:55:53.840148 | orchestrator | 2026-04-08 03:55:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:53.841494 | orchestrator | 2026-04-08 03:55:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:55:53.841542 | orchestrator | 2026-04-08 03:55:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:55:56.888576 | orchestrator | 2026-04-08 03:55:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:56.890586 | orchestrator | 2026-04-08 03:55:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:55:56.890661 | orchestrator | 2026-04-08 03:55:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:55:59.933009 | orchestrator | 2026-04-08 03:55:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:55:59.935364 | orchestrator | 2026-04-08 03:55:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:55:59.935513 | orchestrator | 2026-04-08 03:55:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:56:02.977190 | orchestrator | 2026-04-08 03:56:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:56:02.978951 | orchestrator | 2026-04-08 03:56:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:56:02.978974 | orchestrator | 2026-04-08 03:56:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:56:06.025641 | orchestrator | 2026-04-08 03:56:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 03:56:06.027356 | orchestrator | 2026-04-08 03:56:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:56:06.027412 | orchestrator | 2026-04-08 03:56:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:56:09.069358 | orchestrator | 2026-04-08 03:56:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:56:09.069802 | orchestrator | 2026-04-08 03:56:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:56:09.069819 | orchestrator | 2026-04-08 03:56:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:56:12.113671 | orchestrator | 2026-04-08 03:56:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:56:12.115655 | orchestrator | 2026-04-08 03:56:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:56:12.115701 | orchestrator | 2026-04-08 03:56:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:56:15.167820 | orchestrator | 2026-04-08 03:56:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:56:15.169225 | orchestrator | 2026-04-08 03:56:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:56:15.169418 | orchestrator | 2026-04-08 03:56:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:56:18.212362 | orchestrator | 2026-04-08 03:56:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:56:18.213435 | orchestrator | 2026-04-08 03:56:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:56:18.213511 | orchestrator | 2026-04-08 03:56:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:56:21.255363 | orchestrator | 2026-04-08 03:56:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:56:21.258365 | orchestrator | 2026-04-08 03:56:21 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:56:21.258426 | orchestrator | 2026-04-08 03:56:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:56:24.306748 | orchestrator | 2026-04-08 03:56:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:56:24.308472 | orchestrator | 2026-04-08 03:56:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:56:24.308532 | orchestrator | 2026-04-08 03:56:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:56:27.361069 | orchestrator | 2026-04-08 03:56:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:56:27.362924 | orchestrator | 2026-04-08 03:56:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:56:27.362989 | orchestrator | 2026-04-08 03:56:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:56:30.408727 | orchestrator | 2026-04-08 03:56:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:56:30.413861 | orchestrator | 2026-04-08 03:56:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:56:30.413932 | orchestrator | 2026-04-08 03:56:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:56:33.463345 | orchestrator | 2026-04-08 03:56:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:56:33.464959 | orchestrator | 2026-04-08 03:56:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:56:33.465002 | orchestrator | 2026-04-08 03:56:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:56:36.517732 | orchestrator | 2026-04-08 03:56:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:56:36.520925 | orchestrator | 2026-04-08 03:56:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
03:56:36.521281 | orchestrator | 2026-04-08 03:56:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:56:39.559132 | orchestrator | 2026-04-08 03:56:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:56:39.561115 | orchestrator | 2026-04-08 03:56:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:56:39.561143 | orchestrator | 2026-04-08 03:56:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:56:42.606120 | orchestrator | 2026-04-08 03:56:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:56:42.609189 | orchestrator | 2026-04-08 03:56:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:56:42.609474 | orchestrator | 2026-04-08 03:56:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:56:45.641802 | orchestrator | 2026-04-08 03:56:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:56:45.641923 | orchestrator | 2026-04-08 03:56:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:56:45.641934 | orchestrator | 2026-04-08 03:56:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:56:48.680321 | orchestrator | 2026-04-08 03:56:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:56:48.683138 | orchestrator | 2026-04-08 03:56:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:56:48.683206 | orchestrator | 2026-04-08 03:56:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:56:51.726708 | orchestrator | 2026-04-08 03:56:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:56:51.727481 | orchestrator | 2026-04-08 03:56:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:56:51.727583 | orchestrator | 2026-04-08 03:56:51 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 03:56:54.778954 | orchestrator | 2026-04-08 03:56:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:56:54.780779 | orchestrator | 2026-04-08 03:56:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:56:54.781027 | orchestrator | 2026-04-08 03:56:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:56:57.821859 | orchestrator | 2026-04-08 03:56:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:56:57.822923 | orchestrator | 2026-04-08 03:56:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:56:57.822958 | orchestrator | 2026-04-08 03:56:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:00.869586 | orchestrator | 2026-04-08 03:57:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:57:00.870821 | orchestrator | 2026-04-08 03:57:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:57:00.870862 | orchestrator | 2026-04-08 03:57:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:03.923069 | orchestrator | 2026-04-08 03:57:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:57:03.924957 | orchestrator | 2026-04-08 03:57:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:57:03.925435 | orchestrator | 2026-04-08 03:57:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:06.971942 | orchestrator | 2026-04-08 03:57:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:57:06.973913 | orchestrator | 2026-04-08 03:57:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:57:06.973988 | orchestrator | 2026-04-08 03:57:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:10.025843 | orchestrator | 2026-04-08 
03:57:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:57:10.026680 | orchestrator | 2026-04-08 03:57:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:57:10.027759 | orchestrator | 2026-04-08 03:57:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:13.079825 | orchestrator | 2026-04-08 03:57:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:57:13.080717 | orchestrator | 2026-04-08 03:57:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:57:13.080941 | orchestrator | 2026-04-08 03:57:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:16.126777 | orchestrator | 2026-04-08 03:57:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:57:16.127288 | orchestrator | 2026-04-08 03:57:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:57:16.127337 | orchestrator | 2026-04-08 03:57:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:19.177753 | orchestrator | 2026-04-08 03:57:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:57:19.181339 | orchestrator | 2026-04-08 03:57:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:57:19.181507 | orchestrator | 2026-04-08 03:57:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:22.238538 | orchestrator | 2026-04-08 03:57:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:57:22.240145 | orchestrator | 2026-04-08 03:57:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:57:22.240191 | orchestrator | 2026-04-08 03:57:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:25.289445 | orchestrator | 2026-04-08 03:57:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 03:57:25.292301 | orchestrator | 2026-04-08 03:57:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:57:25.292391 | orchestrator | 2026-04-08 03:57:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:28.343264 | orchestrator | 2026-04-08 03:57:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:57:28.346983 | orchestrator | 2026-04-08 03:57:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:57:28.347068 | orchestrator | 2026-04-08 03:57:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:31.394919 | orchestrator | 2026-04-08 03:57:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:57:31.396890 | orchestrator | 2026-04-08 03:57:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:57:31.397008 | orchestrator | 2026-04-08 03:57:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:34.444951 | orchestrator | 2026-04-08 03:57:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:57:34.446765 | orchestrator | 2026-04-08 03:57:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:57:34.446844 | orchestrator | 2026-04-08 03:57:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:37.493342 | orchestrator | 2026-04-08 03:57:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:57:37.495731 | orchestrator | 2026-04-08 03:57:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:57:37.496005 | orchestrator | 2026-04-08 03:57:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:40.549711 | orchestrator | 2026-04-08 03:57:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:57:40.551773 | orchestrator | 2026-04-08 03:57:40 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:57:40.551859 | orchestrator | 2026-04-08 03:57:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:43.600060 | orchestrator | 2026-04-08 03:57:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:57:43.601600 | orchestrator | 2026-04-08 03:57:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:57:43.601687 | orchestrator | 2026-04-08 03:57:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:46.651584 | orchestrator | 2026-04-08 03:57:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:57:46.653428 | orchestrator | 2026-04-08 03:57:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:57:46.653456 | orchestrator | 2026-04-08 03:57:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:49.706678 | orchestrator | 2026-04-08 03:57:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:57:49.708833 | orchestrator | 2026-04-08 03:57:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:57:49.708882 | orchestrator | 2026-04-08 03:57:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:52.756423 | orchestrator | 2026-04-08 03:57:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:57:52.758361 | orchestrator | 2026-04-08 03:57:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:57:52.758486 | orchestrator | 2026-04-08 03:57:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:55.812438 | orchestrator | 2026-04-08 03:57:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:57:55.814776 | orchestrator | 2026-04-08 03:57:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
03:57:55.814863 | orchestrator | 2026-04-08 03:57:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 03:57:58.862384 | orchestrator | 2026-04-08 03:57:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 03:57:58.865334 | orchestrator | 2026-04-08 03:57:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 03:57:58.865414 | orchestrator | 2026-04-08 03:57:58 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 03:58:01 through 04:02:54; both tasks remain in state STARTED throughout ...]
2026-04-08 04:02:57.864088 | orchestrator | 2026-04-08 04:02:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:02:57.866303 | orchestrator | 2026-04-08 04:02:57 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:02:57.866384 | orchestrator | 2026-04-08 04:02:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:03:00.910201 | orchestrator | 2026-04-08 04:03:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:00.911937 | orchestrator | 2026-04-08 04:03:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:03:00.911974 | orchestrator | 2026-04-08 04:03:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:03:03.958890 | orchestrator | 2026-04-08 04:03:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:03.961533 | orchestrator | 2026-04-08 04:03:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:03:03.961579 | orchestrator | 2026-04-08 04:03:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:03:07.006157 | orchestrator | 2026-04-08 04:03:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:07.008640 | orchestrator | 2026-04-08 04:03:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:03:07.008697 | orchestrator | 2026-04-08 04:03:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:03:10.054904 | orchestrator | 2026-04-08 04:03:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:10.055528 | orchestrator | 2026-04-08 04:03:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:03:10.055830 | orchestrator | 2026-04-08 04:03:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:03:13.110338 | orchestrator | 2026-04-08 04:03:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:13.112185 | orchestrator | 2026-04-08 04:03:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
04:03:13.112240 | orchestrator | 2026-04-08 04:03:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:03:16.162920 | orchestrator | 2026-04-08 04:03:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:16.166167 | orchestrator | 2026-04-08 04:03:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:03:16.166251 | orchestrator | 2026-04-08 04:03:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:03:19.212318 | orchestrator | 2026-04-08 04:03:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:19.213903 | orchestrator | 2026-04-08 04:03:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:03:19.214152 | orchestrator | 2026-04-08 04:03:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:03:22.257636 | orchestrator | 2026-04-08 04:03:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:22.258748 | orchestrator | 2026-04-08 04:03:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:03:22.258809 | orchestrator | 2026-04-08 04:03:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:03:25.302396 | orchestrator | 2026-04-08 04:03:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:25.302592 | orchestrator | 2026-04-08 04:03:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:03:25.302607 | orchestrator | 2026-04-08 04:03:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:03:28.360929 | orchestrator | 2026-04-08 04:03:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:28.364657 | orchestrator | 2026-04-08 04:03:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:03:28.364724 | orchestrator | 2026-04-08 04:03:28 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 04:03:31.413460 | orchestrator | 2026-04-08 04:03:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:31.415317 | orchestrator | 2026-04-08 04:03:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:03:31.415413 | orchestrator | 2026-04-08 04:03:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:03:34.469211 | orchestrator | 2026-04-08 04:03:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:34.469937 | orchestrator | 2026-04-08 04:03:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:03:34.470313 | orchestrator | 2026-04-08 04:03:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:03:37.519302 | orchestrator | 2026-04-08 04:03:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:37.520961 | orchestrator | 2026-04-08 04:03:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:03:37.520991 | orchestrator | 2026-04-08 04:03:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:03:40.572689 | orchestrator | 2026-04-08 04:03:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:40.574587 | orchestrator | 2026-04-08 04:03:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:03:40.574640 | orchestrator | 2026-04-08 04:03:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:03:43.623451 | orchestrator | 2026-04-08 04:03:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:43.624499 | orchestrator | 2026-04-08 04:03:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:03:43.624556 | orchestrator | 2026-04-08 04:03:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:03:46.671500 | orchestrator | 2026-04-08 
04:03:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:46.674394 | orchestrator | 2026-04-08 04:03:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:03:46.674828 | orchestrator | 2026-04-08 04:03:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:03:49.722477 | orchestrator | 2026-04-08 04:03:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:49.724243 | orchestrator | 2026-04-08 04:03:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:03:49.724279 | orchestrator | 2026-04-08 04:03:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:03:52.779729 | orchestrator | 2026-04-08 04:03:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:52.779817 | orchestrator | 2026-04-08 04:03:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:03:52.779829 | orchestrator | 2026-04-08 04:03:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:03:55.815238 | orchestrator | 2026-04-08 04:03:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:55.817000 | orchestrator | 2026-04-08 04:03:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:03:55.817103 | orchestrator | 2026-04-08 04:03:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:03:58.866715 | orchestrator | 2026-04-08 04:03:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:03:58.869165 | orchestrator | 2026-04-08 04:03:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:03:58.869224 | orchestrator | 2026-04-08 04:03:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:04:01.914942 | orchestrator | 2026-04-08 04:04:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 04:04:01.917633 | orchestrator | 2026-04-08 04:04:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:04:01.917863 | orchestrator | 2026-04-08 04:04:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:04:04.957397 | orchestrator | 2026-04-08 04:04:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:04:04.959176 | orchestrator | 2026-04-08 04:04:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:04:04.959234 | orchestrator | 2026-04-08 04:04:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:04:08.003276 | orchestrator | 2026-04-08 04:04:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:04:08.004777 | orchestrator | 2026-04-08 04:04:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:04:08.004836 | orchestrator | 2026-04-08 04:04:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:04:11.047527 | orchestrator | 2026-04-08 04:04:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:04:11.048984 | orchestrator | 2026-04-08 04:04:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:04:11.049106 | orchestrator | 2026-04-08 04:04:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:04:14.089886 | orchestrator | 2026-04-08 04:04:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:04:14.090913 | orchestrator | 2026-04-08 04:04:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:04:14.091082 | orchestrator | 2026-04-08 04:04:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:04:17.137859 | orchestrator | 2026-04-08 04:04:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:04:17.139565 | orchestrator | 2026-04-08 04:04:17 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:04:17.139622 | orchestrator | 2026-04-08 04:04:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:04:20.189158 | orchestrator | 2026-04-08 04:04:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:04:20.190247 | orchestrator | 2026-04-08 04:04:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:04:20.190303 | orchestrator | 2026-04-08 04:04:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:04:23.233791 | orchestrator | 2026-04-08 04:04:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:04:23.234173 | orchestrator | 2026-04-08 04:04:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:04:23.234250 | orchestrator | 2026-04-08 04:04:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:04:26.286221 | orchestrator | 2026-04-08 04:04:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:04:26.288097 | orchestrator | 2026-04-08 04:04:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:04:26.288129 | orchestrator | 2026-04-08 04:04:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:04:29.339137 | orchestrator | 2026-04-08 04:04:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:04:29.341489 | orchestrator | 2026-04-08 04:04:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:04:29.342682 | orchestrator | 2026-04-08 04:04:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:04:32.384534 | orchestrator | 2026-04-08 04:04:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:04:32.387622 | orchestrator | 2026-04-08 04:04:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
04:04:32.387677 | orchestrator | 2026-04-08 04:04:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:04:35.436800 | orchestrator | 2026-04-08 04:04:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:04:35.437972 | orchestrator | 2026-04-08 04:04:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:04:35.438409 | orchestrator | 2026-04-08 04:04:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:04:38.479509 | orchestrator | 2026-04-08 04:04:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:04:38.479810 | orchestrator | 2026-04-08 04:04:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:04:38.481484 | orchestrator | 2026-04-08 04:04:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:04:41.523419 | orchestrator | 2026-04-08 04:04:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:04:41.523948 | orchestrator | 2026-04-08 04:04:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:04:41.524639 | orchestrator | 2026-04-08 04:04:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:04:44.582343 | orchestrator | 2026-04-08 04:04:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:04:44.584470 | orchestrator | 2026-04-08 04:04:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:04:44.584539 | orchestrator | 2026-04-08 04:04:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:04:47.633230 | orchestrator | 2026-04-08 04:04:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:04:47.635337 | orchestrator | 2026-04-08 04:04:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:04:47.635393 | orchestrator | 2026-04-08 04:04:47 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 04:04:50.688347 | orchestrator | 2026-04-08 04:04:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:04:50.690656 | orchestrator | 2026-04-08 04:04:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:04:50.691030 | orchestrator | 2026-04-08 04:04:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:04:53.744897 | orchestrator | 2026-04-08 04:04:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:04:53.747608 | orchestrator | 2026-04-08 04:04:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:04:53.747706 | orchestrator | 2026-04-08 04:04:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:04:56.793420 | orchestrator | 2026-04-08 04:04:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:04:56.795342 | orchestrator | 2026-04-08 04:04:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:04:56.795466 | orchestrator | 2026-04-08 04:04:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:04:59.852066 | orchestrator | 2026-04-08 04:04:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:04:59.852961 | orchestrator | 2026-04-08 04:04:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:04:59.853112 | orchestrator | 2026-04-08 04:04:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:05:02.899552 | orchestrator | 2026-04-08 04:05:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:05:02.901090 | orchestrator | 2026-04-08 04:05:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:05:02.901129 | orchestrator | 2026-04-08 04:05:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:05:05.937218 | orchestrator | 2026-04-08 
04:05:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:05:05.938797 | orchestrator | 2026-04-08 04:05:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:05:05.938835 | orchestrator | 2026-04-08 04:05:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:05:08.985324 | orchestrator | 2026-04-08 04:05:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:05:08.987475 | orchestrator | 2026-04-08 04:05:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:05:08.987621 | orchestrator | 2026-04-08 04:05:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:05:12.045595 | orchestrator | 2026-04-08 04:05:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:05:12.046492 | orchestrator | 2026-04-08 04:05:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:05:12.046540 | orchestrator | 2026-04-08 04:05:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:05:15.094088 | orchestrator | 2026-04-08 04:05:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:05:15.095214 | orchestrator | 2026-04-08 04:05:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:05:15.095259 | orchestrator | 2026-04-08 04:05:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:05:18.148296 | orchestrator | 2026-04-08 04:05:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:05:18.149320 | orchestrator | 2026-04-08 04:05:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:05:18.149581 | orchestrator | 2026-04-08 04:05:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:05:21.199289 | orchestrator | 2026-04-08 04:05:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 04:05:21.200198 | orchestrator | 2026-04-08 04:05:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:05:21.200316 | orchestrator | 2026-04-08 04:05:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:05:24.248354 | orchestrator | 2026-04-08 04:05:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:05:24.249951 | orchestrator | 2026-04-08 04:05:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:05:24.250219 | orchestrator | 2026-04-08 04:05:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:05:27.298315 | orchestrator | 2026-04-08 04:05:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:05:27.299493 | orchestrator | 2026-04-08 04:05:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:05:27.299660 | orchestrator | 2026-04-08 04:05:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:05:30.350475 | orchestrator | 2026-04-08 04:05:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:05:30.352603 | orchestrator | 2026-04-08 04:05:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:05:30.352669 | orchestrator | 2026-04-08 04:05:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:05:33.402959 | orchestrator | 2026-04-08 04:05:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:05:33.404477 | orchestrator | 2026-04-08 04:05:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:05:33.404519 | orchestrator | 2026-04-08 04:05:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:05:36.453162 | orchestrator | 2026-04-08 04:05:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:05:36.455806 | orchestrator | 2026-04-08 04:05:36 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:05:36.455882 | orchestrator | 2026-04-08 04:05:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:05:39.504478 | orchestrator | 2026-04-08 04:05:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:05:39.506707 | orchestrator | 2026-04-08 04:05:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:05:39.506783 | orchestrator | 2026-04-08 04:05:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:05:42.556779 | orchestrator | 2026-04-08 04:05:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:05:42.558344 | orchestrator | 2026-04-08 04:05:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:05:42.558368 | orchestrator | 2026-04-08 04:05:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:05:45.614859 | orchestrator | 2026-04-08 04:05:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:05:45.617528 | orchestrator | 2026-04-08 04:05:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:05:45.617563 | orchestrator | 2026-04-08 04:05:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:05:48.668295 | orchestrator | 2026-04-08 04:05:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:05:48.699178 | orchestrator | 2026-04-08 04:05:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:05:48.699225 | orchestrator | 2026-04-08 04:05:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:05:51.717279 | orchestrator | 2026-04-08 04:05:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:05:51.719089 | orchestrator | 2026-04-08 04:05:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
04:05:51.719108 | orchestrator | 2026-04-08 04:05:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:05:54.760259 | orchestrator | 2026-04-08 04:05:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:05:54.762270 | orchestrator | 2026-04-08 04:05:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:05:54.762328 | orchestrator | 2026-04-08 04:05:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:05:57.811466 | orchestrator | 2026-04-08 04:05:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:05:57.813754 | orchestrator | 2026-04-08 04:05:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:05:57.813831 | orchestrator | 2026-04-08 04:05:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:06:00.861423 | orchestrator | 2026-04-08 04:06:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:06:00.863109 | orchestrator | 2026-04-08 04:06:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:06:00.863153 | orchestrator | 2026-04-08 04:06:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:06:03.904277 | orchestrator | 2026-04-08 04:06:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:06:03.905243 | orchestrator | 2026-04-08 04:06:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:06:03.905277 | orchestrator | 2026-04-08 04:06:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:06:06.950406 | orchestrator | 2026-04-08 04:06:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:06:06.952359 | orchestrator | 2026-04-08 04:06:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:06:06.952784 | orchestrator | 2026-04-08 04:06:06 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 04:06:10.002898 | orchestrator | 2026-04-08 04:06:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:06:10.010742 | orchestrator | 2026-04-08 04:06:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:06:10.010832 | orchestrator | 2026-04-08 04:06:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:06:13.059159 | orchestrator | 2026-04-08 04:06:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:06:13.060315 | orchestrator | 2026-04-08 04:06:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:06:13.060381 | orchestrator | 2026-04-08 04:06:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:06:16.112834 | orchestrator | 2026-04-08 04:06:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:06:16.114962 | orchestrator | 2026-04-08 04:06:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:06:16.115054 | orchestrator | 2026-04-08 04:06:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:06:19.170608 | orchestrator | 2026-04-08 04:06:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:06:19.172109 | orchestrator | 2026-04-08 04:06:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:06:19.172194 | orchestrator | 2026-04-08 04:06:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:06:22.214219 | orchestrator | 2026-04-08 04:06:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:06:22.216438 | orchestrator | 2026-04-08 04:06:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:06:22.216537 | orchestrator | 2026-04-08 04:06:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:06:25.269385 | orchestrator | 2026-04-08 
04:06:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:06:25.271292 | orchestrator | 2026-04-08 04:06:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:06:25.271538 | orchestrator | 2026-04-08 04:06:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:06:28.318275 | orchestrator | 2026-04-08 04:06:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:06:28.320830 | orchestrator | 2026-04-08 04:06:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:06:28.320870 | orchestrator | 2026-04-08 04:06:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:06:31.364039 | orchestrator | 2026-04-08 04:06:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:06:31.366916 | orchestrator | 2026-04-08 04:06:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:06:31.367050 | orchestrator | 2026-04-08 04:06:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:06:34.407436 | orchestrator | 2026-04-08 04:06:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:06:34.408026 | orchestrator | 2026-04-08 04:06:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:06:34.408123 | orchestrator | 2026-04-08 04:06:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:06:37.458121 | orchestrator | 2026-04-08 04:06:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:06:37.460207 | orchestrator | 2026-04-08 04:06:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:06:37.460240 | orchestrator | 2026-04-08 04:06:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:06:40.505198 | orchestrator | 2026-04-08 04:06:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 04:06:40.506211 | orchestrator | 2026-04-08 04:06:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED
2026-04-08 04:06:40.506261 | orchestrator | 2026-04-08 04:06:40 | INFO  | Wait 1 second(s) until the next check
[... repetitive polling output elided: tasks e8d40863-7499-4ee6-9471-9efa1265639c and 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 remained in state STARTED, with a status check roughly every 3 seconds from 04:06:43 through 04:12:10 ...]
2026-04-08 04:12:13.136412 | orchestrator | 2026-04-08 04:12:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED
2026-04-08 04:12:13.138273 | orchestrator | 2026-04-08 04:12:13 | INFO 
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:12:13.138324 | orchestrator | 2026-04-08 04:12:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:12:16.188172 | orchestrator | 2026-04-08 04:12:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:12:16.190301 | orchestrator | 2026-04-08 04:12:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:12:16.190365 | orchestrator | 2026-04-08 04:12:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:12:19.239785 | orchestrator | 2026-04-08 04:12:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:12:19.240949 | orchestrator | 2026-04-08 04:12:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:12:19.241002 | orchestrator | 2026-04-08 04:12:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:12:22.289681 | orchestrator | 2026-04-08 04:12:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:12:22.291730 | orchestrator | 2026-04-08 04:12:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:12:22.291825 | orchestrator | 2026-04-08 04:12:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:12:25.347005 | orchestrator | 2026-04-08 04:12:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:12:25.348027 | orchestrator | 2026-04-08 04:12:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:12:25.348081 | orchestrator | 2026-04-08 04:12:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:12:28.401204 | orchestrator | 2026-04-08 04:12:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:12:28.402541 | orchestrator | 2026-04-08 04:12:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
04:12:28.402620 | orchestrator | 2026-04-08 04:12:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:12:31.444357 | orchestrator | 2026-04-08 04:12:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:12:31.446368 | orchestrator | 2026-04-08 04:12:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:12:31.446420 | orchestrator | 2026-04-08 04:12:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:12:34.496172 | orchestrator | 2026-04-08 04:12:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:12:34.497610 | orchestrator | 2026-04-08 04:12:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:12:34.497762 | orchestrator | 2026-04-08 04:12:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:12:37.543834 | orchestrator | 2026-04-08 04:12:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:12:37.544016 | orchestrator | 2026-04-08 04:12:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:12:37.544035 | orchestrator | 2026-04-08 04:12:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:12:40.592604 | orchestrator | 2026-04-08 04:12:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:12:40.594472 | orchestrator | 2026-04-08 04:12:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:12:40.594560 | orchestrator | 2026-04-08 04:12:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:12:43.644710 | orchestrator | 2026-04-08 04:12:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:12:43.646340 | orchestrator | 2026-04-08 04:12:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:12:43.646376 | orchestrator | 2026-04-08 04:12:43 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 04:12:46.697126 | orchestrator | 2026-04-08 04:12:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:12:46.699750 | orchestrator | 2026-04-08 04:12:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:12:46.699814 | orchestrator | 2026-04-08 04:12:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:12:49.745090 | orchestrator | 2026-04-08 04:12:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:12:49.748553 | orchestrator | 2026-04-08 04:12:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:12:49.748616 | orchestrator | 2026-04-08 04:12:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:12:52.790559 | orchestrator | 2026-04-08 04:12:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:12:52.792779 | orchestrator | 2026-04-08 04:12:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:12:52.793045 | orchestrator | 2026-04-08 04:12:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:12:55.839079 | orchestrator | 2026-04-08 04:12:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:12:55.839624 | orchestrator | 2026-04-08 04:12:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:12:55.839679 | orchestrator | 2026-04-08 04:12:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:12:58.898197 | orchestrator | 2026-04-08 04:12:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:12:58.899112 | orchestrator | 2026-04-08 04:12:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:12:58.899140 | orchestrator | 2026-04-08 04:12:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:01.952108 | orchestrator | 2026-04-08 
04:13:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:13:01.952395 | orchestrator | 2026-04-08 04:13:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:13:01.952435 | orchestrator | 2026-04-08 04:13:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:05.001646 | orchestrator | 2026-04-08 04:13:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:13:05.003274 | orchestrator | 2026-04-08 04:13:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:13:05.003380 | orchestrator | 2026-04-08 04:13:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:08.050806 | orchestrator | 2026-04-08 04:13:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:13:08.051739 | orchestrator | 2026-04-08 04:13:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:13:08.051806 | orchestrator | 2026-04-08 04:13:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:11.099240 | orchestrator | 2026-04-08 04:13:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:13:11.100402 | orchestrator | 2026-04-08 04:13:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:13:11.100463 | orchestrator | 2026-04-08 04:13:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:14.148344 | orchestrator | 2026-04-08 04:13:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:13:14.149333 | orchestrator | 2026-04-08 04:13:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:13:14.149375 | orchestrator | 2026-04-08 04:13:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:17.199963 | orchestrator | 2026-04-08 04:13:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 04:13:17.200772 | orchestrator | 2026-04-08 04:13:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:13:17.201111 | orchestrator | 2026-04-08 04:13:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:20.244218 | orchestrator | 2026-04-08 04:13:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:13:20.244953 | orchestrator | 2026-04-08 04:13:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:13:20.244987 | orchestrator | 2026-04-08 04:13:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:23.298191 | orchestrator | 2026-04-08 04:13:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:13:23.299449 | orchestrator | 2026-04-08 04:13:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:13:23.299479 | orchestrator | 2026-04-08 04:13:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:26.350953 | orchestrator | 2026-04-08 04:13:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:13:26.351985 | orchestrator | 2026-04-08 04:13:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:13:26.352016 | orchestrator | 2026-04-08 04:13:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:29.409093 | orchestrator | 2026-04-08 04:13:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:13:29.411608 | orchestrator | 2026-04-08 04:13:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:13:29.411635 | orchestrator | 2026-04-08 04:13:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:32.457663 | orchestrator | 2026-04-08 04:13:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:13:32.459498 | orchestrator | 2026-04-08 04:13:32 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:13:32.459546 | orchestrator | 2026-04-08 04:13:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:35.507749 | orchestrator | 2026-04-08 04:13:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:13:35.508224 | orchestrator | 2026-04-08 04:13:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:13:35.508306 | orchestrator | 2026-04-08 04:13:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:38.558911 | orchestrator | 2026-04-08 04:13:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:13:38.560386 | orchestrator | 2026-04-08 04:13:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:13:38.560443 | orchestrator | 2026-04-08 04:13:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:41.612328 | orchestrator | 2026-04-08 04:13:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:13:41.614961 | orchestrator | 2026-04-08 04:13:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:13:41.615019 | orchestrator | 2026-04-08 04:13:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:44.673133 | orchestrator | 2026-04-08 04:13:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:13:44.673922 | orchestrator | 2026-04-08 04:13:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:13:44.673956 | orchestrator | 2026-04-08 04:13:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:47.723021 | orchestrator | 2026-04-08 04:13:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:13:47.724888 | orchestrator | 2026-04-08 04:13:47 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
04:13:47.725024 | orchestrator | 2026-04-08 04:13:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:50.763235 | orchestrator | 2026-04-08 04:13:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:13:50.765240 | orchestrator | 2026-04-08 04:13:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:13:50.765362 | orchestrator | 2026-04-08 04:13:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:53.813336 | orchestrator | 2026-04-08 04:13:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:13:53.815305 | orchestrator | 2026-04-08 04:13:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:13:53.815378 | orchestrator | 2026-04-08 04:13:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:56.865037 | orchestrator | 2026-04-08 04:13:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:13:56.868167 | orchestrator | 2026-04-08 04:13:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:13:56.868237 | orchestrator | 2026-04-08 04:13:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:13:59.922794 | orchestrator | 2026-04-08 04:13:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:13:59.924603 | orchestrator | 2026-04-08 04:13:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:13:59.924662 | orchestrator | 2026-04-08 04:13:59 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:14:02.964968 | orchestrator | 2026-04-08 04:14:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:14:02.967033 | orchestrator | 2026-04-08 04:14:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:14:02.967079 | orchestrator | 2026-04-08 04:14:02 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 04:14:06.018244 | orchestrator | 2026-04-08 04:14:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:14:06.019167 | orchestrator | 2026-04-08 04:14:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:14:06.019249 | orchestrator | 2026-04-08 04:14:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:14:09.058435 | orchestrator | 2026-04-08 04:14:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:14:09.059719 | orchestrator | 2026-04-08 04:14:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:14:09.059774 | orchestrator | 2026-04-08 04:14:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:14:12.107505 | orchestrator | 2026-04-08 04:14:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:14:12.109465 | orchestrator | 2026-04-08 04:14:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:14:12.109912 | orchestrator | 2026-04-08 04:14:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:14:15.162211 | orchestrator | 2026-04-08 04:14:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:14:15.163295 | orchestrator | 2026-04-08 04:14:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:14:15.163324 | orchestrator | 2026-04-08 04:14:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:14:18.211691 | orchestrator | 2026-04-08 04:14:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:14:18.214249 | orchestrator | 2026-04-08 04:14:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:14:18.214319 | orchestrator | 2026-04-08 04:14:18 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:14:21.259999 | orchestrator | 2026-04-08 
04:14:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:14:21.261559 | orchestrator | 2026-04-08 04:14:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:14:21.261602 | orchestrator | 2026-04-08 04:14:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:14:24.308808 | orchestrator | 2026-04-08 04:14:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:14:24.310185 | orchestrator | 2026-04-08 04:14:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:14:24.310240 | orchestrator | 2026-04-08 04:14:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:14:27.359963 | orchestrator | 2026-04-08 04:14:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:14:27.361199 | orchestrator | 2026-04-08 04:14:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:14:27.361246 | orchestrator | 2026-04-08 04:14:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:14:30.410140 | orchestrator | 2026-04-08 04:14:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:14:30.411248 | orchestrator | 2026-04-08 04:14:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:14:30.411270 | orchestrator | 2026-04-08 04:14:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:14:33.462472 | orchestrator | 2026-04-08 04:14:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:14:33.462941 | orchestrator | 2026-04-08 04:14:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:14:33.462992 | orchestrator | 2026-04-08 04:14:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:14:36.512532 | orchestrator | 2026-04-08 04:14:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 04:14:36.514188 | orchestrator | 2026-04-08 04:14:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:14:36.514221 | orchestrator | 2026-04-08 04:14:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:14:39.570501 | orchestrator | 2026-04-08 04:14:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:14:39.571153 | orchestrator | 2026-04-08 04:14:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:14:39.571489 | orchestrator | 2026-04-08 04:14:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:14:42.623740 | orchestrator | 2026-04-08 04:14:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:14:42.624793 | orchestrator | 2026-04-08 04:14:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:14:42.624889 | orchestrator | 2026-04-08 04:14:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:14:45.676347 | orchestrator | 2026-04-08 04:14:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:14:45.680180 | orchestrator | 2026-04-08 04:14:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:14:45.680284 | orchestrator | 2026-04-08 04:14:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:14:48.724273 | orchestrator | 2026-04-08 04:14:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:14:48.727115 | orchestrator | 2026-04-08 04:14:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:14:48.727396 | orchestrator | 2026-04-08 04:14:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:14:51.773921 | orchestrator | 2026-04-08 04:14:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:14:51.777027 | orchestrator | 2026-04-08 04:14:51 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:14:51.777105 | orchestrator | 2026-04-08 04:14:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:14:54.819425 | orchestrator | 2026-04-08 04:14:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:14:54.821172 | orchestrator | 2026-04-08 04:14:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:14:54.821229 | orchestrator | 2026-04-08 04:14:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:14:57.872428 | orchestrator | 2026-04-08 04:14:57 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:14:57.873994 | orchestrator | 2026-04-08 04:14:57 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:14:57.874114 | orchestrator | 2026-04-08 04:14:57 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:15:00.922485 | orchestrator | 2026-04-08 04:15:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:15:00.923743 | orchestrator | 2026-04-08 04:15:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:15:00.923781 | orchestrator | 2026-04-08 04:15:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:15:03.973033 | orchestrator | 2026-04-08 04:15:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:15:03.975062 | orchestrator | 2026-04-08 04:15:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:15:03.975093 | orchestrator | 2026-04-08 04:15:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:15:07.024259 | orchestrator | 2026-04-08 04:15:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:15:07.029057 | orchestrator | 2026-04-08 04:15:07 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
04:15:07.029150 | orchestrator | 2026-04-08 04:15:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:15:10.083249 | orchestrator | 2026-04-08 04:15:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:15:10.083837 | orchestrator | 2026-04-08 04:15:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:15:10.083859 | orchestrator | 2026-04-08 04:15:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:15:13.138542 | orchestrator | 2026-04-08 04:15:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:15:13.140506 | orchestrator | 2026-04-08 04:15:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:15:13.140554 | orchestrator | 2026-04-08 04:15:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:15:16.189249 | orchestrator | 2026-04-08 04:15:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:15:16.193228 | orchestrator | 2026-04-08 04:15:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:15:16.193319 | orchestrator | 2026-04-08 04:15:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:15:19.238212 | orchestrator | 2026-04-08 04:15:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:15:19.241023 | orchestrator | 2026-04-08 04:15:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:15:19.241102 | orchestrator | 2026-04-08 04:15:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:15:22.286390 | orchestrator | 2026-04-08 04:15:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:15:22.288859 | orchestrator | 2026-04-08 04:15:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:15:22.288924 | orchestrator | 2026-04-08 04:15:22 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 04:15:25.342165 | orchestrator | 2026-04-08 04:15:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:15:25.344954 | orchestrator | 2026-04-08 04:15:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:15:25.345020 | orchestrator | 2026-04-08 04:15:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:15:28.390606 | orchestrator | 2026-04-08 04:15:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:15:28.392039 | orchestrator | 2026-04-08 04:15:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:15:28.392074 | orchestrator | 2026-04-08 04:15:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:15:31.442742 | orchestrator | 2026-04-08 04:15:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:15:31.444003 | orchestrator | 2026-04-08 04:15:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:15:31.444213 | orchestrator | 2026-04-08 04:15:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:15:34.494923 | orchestrator | 2026-04-08 04:15:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:15:34.497179 | orchestrator | 2026-04-08 04:15:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:15:34.497258 | orchestrator | 2026-04-08 04:15:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:15:37.543351 | orchestrator | 2026-04-08 04:15:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:15:37.545057 | orchestrator | 2026-04-08 04:15:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:15:37.545092 | orchestrator | 2026-04-08 04:15:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:15:40.595682 | orchestrator | 2026-04-08 
04:15:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:15:40.596890 | orchestrator | 2026-04-08 04:15:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:15:40.596922 | orchestrator | 2026-04-08 04:15:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:15:43.647060 | orchestrator | 2026-04-08 04:15:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:15:43.649232 | orchestrator | 2026-04-08 04:15:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:15:43.649314 | orchestrator | 2026-04-08 04:15:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:15:46.706358 | orchestrator | 2026-04-08 04:15:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:15:46.707573 | orchestrator | 2026-04-08 04:15:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:15:46.707593 | orchestrator | 2026-04-08 04:15:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:15:49.751562 | orchestrator | 2026-04-08 04:15:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:15:49.752957 | orchestrator | 2026-04-08 04:15:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:15:49.753019 | orchestrator | 2026-04-08 04:15:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:15:52.806567 | orchestrator | 2026-04-08 04:15:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:15:52.809018 | orchestrator | 2026-04-08 04:15:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:15:52.809080 | orchestrator | 2026-04-08 04:15:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:15:55.860110 | orchestrator | 2026-04-08 04:15:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 04:15:55.862533 | orchestrator | 2026-04-08 04:15:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:15:55.862619 | orchestrator | 2026-04-08 04:15:55 | INFO  | Wait 1 second(s) until the next check
[... identical status checks for tasks e8d40863-7499-4ee6-9471-9efa1265639c and 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 repeated roughly every 3 seconds from 04:15:58 through 04:23:10; both tasks remained in state STARTED throughout, with one gap in console output between 04:20:39 and 04:22:39 ...]
2026-04-08 04:23:13.247168 | orchestrator | 2026-04-08 04:23:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state
STARTED 2026-04-08 04:23:13.248619 | orchestrator | 2026-04-08 04:23:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:23:13.248652 | orchestrator | 2026-04-08 04:23:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:23:16.294877 | orchestrator | 2026-04-08 04:23:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:23:16.296485 | orchestrator | 2026-04-08 04:23:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:23:16.296535 | orchestrator | 2026-04-08 04:23:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:23:19.352846 | orchestrator | 2026-04-08 04:23:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:23:19.355277 | orchestrator | 2026-04-08 04:23:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:23:19.355369 | orchestrator | 2026-04-08 04:23:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:23:22.410699 | orchestrator | 2026-04-08 04:23:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:23:22.412885 | orchestrator | 2026-04-08 04:23:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:23:22.412934 | orchestrator | 2026-04-08 04:23:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:23:25.462314 | orchestrator | 2026-04-08 04:23:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:23:25.606405 | orchestrator | 2026-04-08 04:23:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:23:25.606480 | orchestrator | 2026-04-08 04:23:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:23:28.509808 | orchestrator | 2026-04-08 04:23:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:23:28.511254 | orchestrator | 2026-04-08 04:23:28 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:23:28.511349 | orchestrator | 2026-04-08 04:23:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:23:31.555360 | orchestrator | 2026-04-08 04:23:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:23:31.558924 | orchestrator | 2026-04-08 04:23:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:23:31.559041 | orchestrator | 2026-04-08 04:23:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:23:34.608978 | orchestrator | 2026-04-08 04:23:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:23:34.613133 | orchestrator | 2026-04-08 04:23:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:23:34.613202 | orchestrator | 2026-04-08 04:23:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:23:37.660280 | orchestrator | 2026-04-08 04:23:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:23:37.662254 | orchestrator | 2026-04-08 04:23:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:23:37.662330 | orchestrator | 2026-04-08 04:23:37 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:23:40.713347 | orchestrator | 2026-04-08 04:23:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:23:40.715493 | orchestrator | 2026-04-08 04:23:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:23:40.715547 | orchestrator | 2026-04-08 04:23:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:23:43.766597 | orchestrator | 2026-04-08 04:23:43 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:23:43.768333 | orchestrator | 2026-04-08 04:23:43 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
04:23:43.768412 | orchestrator | 2026-04-08 04:23:43 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:23:46.814281 | orchestrator | 2026-04-08 04:23:46 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:23:46.815537 | orchestrator | 2026-04-08 04:23:46 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:23:46.815571 | orchestrator | 2026-04-08 04:23:46 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:23:49.866375 | orchestrator | 2026-04-08 04:23:49 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:23:49.867472 | orchestrator | 2026-04-08 04:23:49 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:23:49.867839 | orchestrator | 2026-04-08 04:23:49 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:23:52.912995 | orchestrator | 2026-04-08 04:23:52 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:23:52.914638 | orchestrator | 2026-04-08 04:23:52 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:23:52.914699 | orchestrator | 2026-04-08 04:23:52 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:23:55.961560 | orchestrator | 2026-04-08 04:23:55 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:23:55.963559 | orchestrator | 2026-04-08 04:23:55 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:23:55.963618 | orchestrator | 2026-04-08 04:23:55 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:23:59.008700 | orchestrator | 2026-04-08 04:23:59 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:23:59.011039 | orchestrator | 2026-04-08 04:23:59 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:23:59.011101 | orchestrator | 2026-04-08 04:23:59 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 04:24:02.063221 | orchestrator | 2026-04-08 04:24:02 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:24:02.065570 | orchestrator | 2026-04-08 04:24:02 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:24:02.065830 | orchestrator | 2026-04-08 04:24:02 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:24:05.114272 | orchestrator | 2026-04-08 04:24:05 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:24:05.115730 | orchestrator | 2026-04-08 04:24:05 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:24:05.115953 | orchestrator | 2026-04-08 04:24:05 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:24:08.162415 | orchestrator | 2026-04-08 04:24:08 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:24:08.163932 | orchestrator | 2026-04-08 04:24:08 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:24:08.163985 | orchestrator | 2026-04-08 04:24:08 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:24:11.205961 | orchestrator | 2026-04-08 04:24:11 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:24:11.208034 | orchestrator | 2026-04-08 04:24:11 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:24:11.208076 | orchestrator | 2026-04-08 04:24:11 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:24:14.259747 | orchestrator | 2026-04-08 04:24:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:24:14.261447 | orchestrator | 2026-04-08 04:24:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:24:14.261519 | orchestrator | 2026-04-08 04:24:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:24:17.307353 | orchestrator | 2026-04-08 
04:24:17 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:24:17.308705 | orchestrator | 2026-04-08 04:24:17 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:24:17.308750 | orchestrator | 2026-04-08 04:24:17 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:24:20.357814 | orchestrator | 2026-04-08 04:24:20 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:24:20.359574 | orchestrator | 2026-04-08 04:24:20 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:24:20.360236 | orchestrator | 2026-04-08 04:24:20 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:24:23.412688 | orchestrator | 2026-04-08 04:24:23 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:24:23.414454 | orchestrator | 2026-04-08 04:24:23 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:24:23.414953 | orchestrator | 2026-04-08 04:24:23 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:24:26.469255 | orchestrator | 2026-04-08 04:24:26 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:24:26.471356 | orchestrator | 2026-04-08 04:24:26 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:24:26.471432 | orchestrator | 2026-04-08 04:24:26 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:24:29.522625 | orchestrator | 2026-04-08 04:24:29 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:24:29.524275 | orchestrator | 2026-04-08 04:24:29 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:24:29.524390 | orchestrator | 2026-04-08 04:24:29 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:24:32.575357 | orchestrator | 2026-04-08 04:24:32 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 04:24:32.575552 | orchestrator | 2026-04-08 04:24:32 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:24:32.575572 | orchestrator | 2026-04-08 04:24:32 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:24:35.632285 | orchestrator | 2026-04-08 04:24:35 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:24:35.633690 | orchestrator | 2026-04-08 04:24:35 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:24:35.633745 | orchestrator | 2026-04-08 04:24:35 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:24:38.682422 | orchestrator | 2026-04-08 04:24:38 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:24:38.685219 | orchestrator | 2026-04-08 04:24:38 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:24:38.685308 | orchestrator | 2026-04-08 04:24:38 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:24:41.735338 | orchestrator | 2026-04-08 04:24:41 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:24:41.736604 | orchestrator | 2026-04-08 04:24:41 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:24:41.736667 | orchestrator | 2026-04-08 04:24:41 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:24:44.787569 | orchestrator | 2026-04-08 04:24:44 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:24:44.787676 | orchestrator | 2026-04-08 04:24:44 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:24:44.787692 | orchestrator | 2026-04-08 04:24:44 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:24:47.841615 | orchestrator | 2026-04-08 04:24:47 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:24:47.843629 | orchestrator | 2026-04-08 04:24:47 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:24:47.843700 | orchestrator | 2026-04-08 04:24:47 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:24:50.891925 | orchestrator | 2026-04-08 04:24:50 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:24:50.893536 | orchestrator | 2026-04-08 04:24:50 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:24:50.893658 | orchestrator | 2026-04-08 04:24:50 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:24:53.943312 | orchestrator | 2026-04-08 04:24:53 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:24:53.944412 | orchestrator | 2026-04-08 04:24:53 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:24:53.944492 | orchestrator | 2026-04-08 04:24:53 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:24:56.995320 | orchestrator | 2026-04-08 04:24:56 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:24:56.997634 | orchestrator | 2026-04-08 04:24:56 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:24:56.997852 | orchestrator | 2026-04-08 04:24:56 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:25:00.041901 | orchestrator | 2026-04-08 04:25:00 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:25:00.041976 | orchestrator | 2026-04-08 04:25:00 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:25:00.041986 | orchestrator | 2026-04-08 04:25:00 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:25:03.091759 | orchestrator | 2026-04-08 04:25:03 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:25:03.093333 | orchestrator | 2026-04-08 04:25:03 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
04:25:03.093416 | orchestrator | 2026-04-08 04:25:03 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:25:06.143590 | orchestrator | 2026-04-08 04:25:06 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:25:06.146914 | orchestrator | 2026-04-08 04:25:06 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:25:06.146972 | orchestrator | 2026-04-08 04:25:06 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:25:09.195327 | orchestrator | 2026-04-08 04:25:09 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:25:09.197239 | orchestrator | 2026-04-08 04:25:09 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:25:09.197289 | orchestrator | 2026-04-08 04:25:09 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:25:12.243251 | orchestrator | 2026-04-08 04:25:12 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:25:12.243455 | orchestrator | 2026-04-08 04:25:12 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:25:12.243473 | orchestrator | 2026-04-08 04:25:12 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:25:15.288154 | orchestrator | 2026-04-08 04:25:15 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:25:15.290711 | orchestrator | 2026-04-08 04:25:15 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:25:15.290769 | orchestrator | 2026-04-08 04:25:15 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:25:18.342386 | orchestrator | 2026-04-08 04:25:18 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:25:18.344200 | orchestrator | 2026-04-08 04:25:18 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:25:18.344270 | orchestrator | 2026-04-08 04:25:18 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 04:25:21.399515 | orchestrator | 2026-04-08 04:25:21 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:25:21.400859 | orchestrator | 2026-04-08 04:25:21 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:25:21.400900 | orchestrator | 2026-04-08 04:25:21 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:25:24.448363 | orchestrator | 2026-04-08 04:25:24 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:25:24.449865 | orchestrator | 2026-04-08 04:25:24 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:25:24.449936 | orchestrator | 2026-04-08 04:25:24 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:25:27.507581 | orchestrator | 2026-04-08 04:25:27 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:25:27.509154 | orchestrator | 2026-04-08 04:25:27 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:25:27.509194 | orchestrator | 2026-04-08 04:25:27 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:25:30.560909 | orchestrator | 2026-04-08 04:25:30 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:25:30.563816 | orchestrator | 2026-04-08 04:25:30 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:25:30.563903 | orchestrator | 2026-04-08 04:25:30 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:25:33.616369 | orchestrator | 2026-04-08 04:25:33 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:25:33.619141 | orchestrator | 2026-04-08 04:25:33 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:25:33.619226 | orchestrator | 2026-04-08 04:25:33 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:25:36.666592 | orchestrator | 2026-04-08 
04:25:36 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:25:36.670309 | orchestrator | 2026-04-08 04:25:36 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:25:36.670361 | orchestrator | 2026-04-08 04:25:36 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:25:39.723139 | orchestrator | 2026-04-08 04:25:39 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:25:39.725517 | orchestrator | 2026-04-08 04:25:39 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:25:39.725562 | orchestrator | 2026-04-08 04:25:39 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:25:42.776721 | orchestrator | 2026-04-08 04:25:42 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:25:42.778275 | orchestrator | 2026-04-08 04:25:42 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:25:42.778326 | orchestrator | 2026-04-08 04:25:42 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:25:45.822295 | orchestrator | 2026-04-08 04:25:45 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:25:45.825000 | orchestrator | 2026-04-08 04:25:45 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:25:45.825074 | orchestrator | 2026-04-08 04:25:45 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:25:48.871369 | orchestrator | 2026-04-08 04:25:48 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:25:48.873491 | orchestrator | 2026-04-08 04:25:48 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:25:48.873584 | orchestrator | 2026-04-08 04:25:48 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:25:51.923036 | orchestrator | 2026-04-08 04:25:51 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state 
STARTED 2026-04-08 04:25:51.926346 | orchestrator | 2026-04-08 04:25:51 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:25:51.926408 | orchestrator | 2026-04-08 04:25:51 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:25:54.976734 | orchestrator | 2026-04-08 04:25:54 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:25:54.979012 | orchestrator | 2026-04-08 04:25:54 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:25:54.979067 | orchestrator | 2026-04-08 04:25:54 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:25:58.026513 | orchestrator | 2026-04-08 04:25:58 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:25:58.028296 | orchestrator | 2026-04-08 04:25:58 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:25:58.028374 | orchestrator | 2026-04-08 04:25:58 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:26:01.070358 | orchestrator | 2026-04-08 04:26:01 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:26:01.072248 | orchestrator | 2026-04-08 04:26:01 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:26:01.072324 | orchestrator | 2026-04-08 04:26:01 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:26:04.121048 | orchestrator | 2026-04-08 04:26:04 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:26:04.123889 | orchestrator | 2026-04-08 04:26:04 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:26:04.124100 | orchestrator | 2026-04-08 04:26:04 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:26:07.172679 | orchestrator | 2026-04-08 04:26:07 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:26:07.175201 | orchestrator | 2026-04-08 04:26:07 | INFO  
| Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:26:07.175345 | orchestrator | 2026-04-08 04:26:07 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:26:10.226654 | orchestrator | 2026-04-08 04:26:10 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:26:10.233059 | orchestrator | 2026-04-08 04:26:10 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:26:10.233134 | orchestrator | 2026-04-08 04:26:10 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:26:13.280409 | orchestrator | 2026-04-08 04:26:13 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:26:13.282224 | orchestrator | 2026-04-08 04:26:13 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:26:13.282267 | orchestrator | 2026-04-08 04:26:13 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:26:16.328309 | orchestrator | 2026-04-08 04:26:16 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:26:16.329841 | orchestrator | 2026-04-08 04:26:16 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:26:16.329990 | orchestrator | 2026-04-08 04:26:16 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:26:19.374542 | orchestrator | 2026-04-08 04:26:19 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:26:19.377234 | orchestrator | 2026-04-08 04:26:19 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:26:19.377301 | orchestrator | 2026-04-08 04:26:19 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:26:22.423602 | orchestrator | 2026-04-08 04:26:22 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:26:22.425878 | orchestrator | 2026-04-08 04:26:22 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 
04:26:22.426209 | orchestrator | 2026-04-08 04:26:22 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:26:25.478561 | orchestrator | 2026-04-08 04:26:25 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:26:25.480940 | orchestrator | 2026-04-08 04:26:25 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:26:25.480980 | orchestrator | 2026-04-08 04:26:25 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:26:28.539161 | orchestrator | 2026-04-08 04:26:28 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:26:28.541017 | orchestrator | 2026-04-08 04:26:28 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:26:28.541186 | orchestrator | 2026-04-08 04:26:28 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:26:31.589861 | orchestrator | 2026-04-08 04:26:31 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:26:31.591240 | orchestrator | 2026-04-08 04:26:31 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:26:31.591315 | orchestrator | 2026-04-08 04:26:31 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:26:34.640046 | orchestrator | 2026-04-08 04:26:34 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:26:34.640332 | orchestrator | 2026-04-08 04:26:34 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:26:34.640374 | orchestrator | 2026-04-08 04:26:34 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:26:37.681533 | orchestrator | 2026-04-08 04:26:37 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:26:37.683885 | orchestrator | 2026-04-08 04:26:37 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:26:37.684125 | orchestrator | 2026-04-08 04:26:37 | INFO  | Wait 1 second(s) 
until the next check 2026-04-08 04:26:40.732270 | orchestrator | 2026-04-08 04:26:40 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:26:40.734208 | orchestrator | 2026-04-08 04:26:40 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:26:40.734420 | orchestrator | 2026-04-08 04:26:40 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:30:14.185510 | orchestrator | 2026-04-08 04:30:14 | INFO  | Task e8d40863-7499-4ee6-9471-9efa1265639c is in state STARTED 2026-04-08 04:30:14.186224 | orchestrator | 2026-04-08 04:30:14 | INFO  | Task 29308bb9-69e9-4c8d-bd06-ff7b62e055f3 is in state STARTED 2026-04-08 04:30:14.186274 | orchestrator | 2026-04-08 04:30:14 | INFO  | Wait 1 second(s) until the next check 2026-04-08 04:30:15.808195 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2026-04-08 04:30:15.810485 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-04-08 04:30:16.627593 | 2026-04-08 04:30:16.627766 | PLAY [Post output play] 2026-04-08 04:30:16.650729 | 2026-04-08 04:30:16.650953 | LOOP [stage-output : Register sources] 2026-04-08 04:30:16.724810 | 2026-04-08 04:30:16.725170 | TASK [stage-output : Check sudo] 2026-04-08 04:30:17.626722 | orchestrator | sudo: a password is required 2026-04-08 04:30:17.768870 | orchestrator | ok: Runtime: 0:00:00.016283 2026-04-08 04:30:17.783176 | 
2026-04-08 04:30:17.783340 | LOOP [stage-output : Set source and destination for files and folders] 2026-04-08 04:30:17.825455 | 2026-04-08 04:30:17.825736 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-04-08 04:30:17.895385 | orchestrator | ok 2026-04-08 04:30:17.904588 | 2026-04-08 04:30:17.904736 | LOOP [stage-output : Ensure target folders exist] 2026-04-08 04:30:18.377547 | orchestrator | ok: "docs" 2026-04-08 04:30:18.377913 | 2026-04-08 04:30:18.651962 | orchestrator | ok: "artifacts" 2026-04-08 04:30:18.964569 | orchestrator | ok: "logs" 2026-04-08 04:30:18.982190 | 2026-04-08 04:30:18.982370 | LOOP [stage-output : Copy files and folders to staging folder] 2026-04-08 04:30:19.021619 | 2026-04-08 04:30:19.021982 | TASK [stage-output : Make all log files readable] 2026-04-08 04:30:19.330914 | orchestrator | ok 2026-04-08 04:30:19.347899 | 2026-04-08 04:30:19.348164 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-04-08 04:30:19.375028 | orchestrator | skipping: Conditional result was False 2026-04-08 04:30:19.390228 | 2026-04-08 04:30:19.390418 | TASK [stage-output : Discover log files for compression] 2026-04-08 04:30:19.415537 | orchestrator | skipping: Conditional result was False 2026-04-08 04:30:19.434654 | 2026-04-08 04:30:19.435018 | LOOP [stage-output : Archive everything from logs] 2026-04-08 04:30:19.482244 | 2026-04-08 04:30:19.482521 | PLAY [Post cleanup play] 2026-04-08 04:30:19.492101 | 2026-04-08 04:30:19.492251 | TASK [Set cloud fact (Zuul deployment)] 2026-04-08 04:30:19.561817 | orchestrator | ok 2026-04-08 04:30:19.573339 | 2026-04-08 04:30:19.573500 | TASK [Set cloud fact (local deployment)] 2026-04-08 04:30:19.599202 | orchestrator | skipping: Conditional result was False 2026-04-08 04:30:19.611398 | 2026-04-08 04:30:19.611574 | TASK [Clean the cloud environment] 2026-04-08 04:30:20.268513 | orchestrator | 2026-04-08 04:30:20 - clean up servers 2026-04-08 04:30:21.376157 | orchestrator | 
2026-04-08 04:30:21 - testbed-manager 2026-04-08 04:30:21.458619 | orchestrator | 2026-04-08 04:30:21 - testbed-node-1 2026-04-08 04:30:21.545684 | orchestrator | 2026-04-08 04:30:21 - testbed-node-0 2026-04-08 04:30:21.643503 | orchestrator | 2026-04-08 04:30:21 - testbed-node-2 2026-04-08 04:30:21.732447 | orchestrator | 2026-04-08 04:30:21 - testbed-node-3 2026-04-08 04:30:21.837208 | orchestrator | 2026-04-08 04:30:21 - testbed-node-5 2026-04-08 04:30:21.935950 | orchestrator | 2026-04-08 04:30:21 - testbed-node-4 2026-04-08 04:30:22.026867 | orchestrator | 2026-04-08 04:30:22 - clean up keypairs 2026-04-08 04:30:22.043616 | orchestrator | 2026-04-08 04:30:22 - testbed 2026-04-08 04:30:22.067082 | orchestrator | 2026-04-08 04:30:22 - wait for servers to be gone 2026-04-08 04:30:33.042697 | orchestrator | 2026-04-08 04:30:33 - clean up ports 2026-04-08 04:30:33.251503 | orchestrator | 2026-04-08 04:30:33 - 7372f077-57ae-4538-8cd2-d59aaad18a41 2026-04-08 04:30:33.573179 | orchestrator | 2026-04-08 04:30:33 - 75da94a1-313d-4eab-9ccb-09f9f7e13170 2026-04-08 04:30:33.804943 | orchestrator | 2026-04-08 04:30:33 - 7f062bbf-642d-440f-8147-87ef15fffa4b 2026-04-08 04:30:34.104336 | orchestrator | 2026-04-08 04:30:34 - 81c9d017-b23e-459b-81a4-dfdef21fa65f 2026-04-08 04:30:34.328455 | orchestrator | 2026-04-08 04:30:34 - b1c5ba36-f26a-48c5-9e05-713d363ec01b 2026-04-08 04:30:34.579710 | orchestrator | 2026-04-08 04:30:34 - e9d4034d-0963-4f8d-b121-8b1e467e1ddb 2026-04-08 04:30:35.041255 | orchestrator | 2026-04-08 04:30:35 - f40c56aa-471a-4e30-9296-cf582196ee53 2026-04-08 04:30:35.301264 | orchestrator | 2026-04-08 04:30:35 - clean up volumes 2026-04-08 04:30:35.413747 | orchestrator | 2026-04-08 04:30:35 - testbed-volume-5-node-base 2026-04-08 04:30:35.460415 | orchestrator | 2026-04-08 04:30:35 - testbed-volume-2-node-base 2026-04-08 04:30:35.506693 | orchestrator | 2026-04-08 04:30:35 - testbed-volume-1-node-base 2026-04-08 04:30:35.551091 | orchestrator | 2026-04-08 
04:30:35 - testbed-volume-0-node-base 2026-04-08 04:30:35.590001 | orchestrator | 2026-04-08 04:30:35 - testbed-volume-3-node-base 2026-04-08 04:30:35.631572 | orchestrator | 2026-04-08 04:30:35 - testbed-volume-4-node-base 2026-04-08 04:30:35.678885 | orchestrator | 2026-04-08 04:30:35 - testbed-volume-manager-base 2026-04-08 04:30:35.727811 | orchestrator | 2026-04-08 04:30:35 - testbed-volume-6-node-3 2026-04-08 04:30:35.774313 | orchestrator | 2026-04-08 04:30:35 - testbed-volume-8-node-5 2026-04-08 04:30:35.823955 | orchestrator | 2026-04-08 04:30:35 - testbed-volume-5-node-5 2026-04-08 04:30:35.870773 | orchestrator | 2026-04-08 04:30:35 - testbed-volume-7-node-4 2026-04-08 04:30:35.916951 | orchestrator | 2026-04-08 04:30:35 - testbed-volume-0-node-3 2026-04-08 04:30:35.961583 | orchestrator | 2026-04-08 04:30:35 - testbed-volume-2-node-5 2026-04-08 04:30:36.012798 | orchestrator | 2026-04-08 04:30:36 - testbed-volume-4-node-4 2026-04-08 04:30:36.062867 | orchestrator | 2026-04-08 04:30:36 - testbed-volume-3-node-3 2026-04-08 04:30:36.110393 | orchestrator | 2026-04-08 04:30:36 - testbed-volume-1-node-4 2026-04-08 04:30:36.155964 | orchestrator | 2026-04-08 04:30:36 - disconnect routers 2026-04-08 04:30:36.316888 | orchestrator | 2026-04-08 04:30:36 - testbed 2026-04-08 04:30:37.884096 | orchestrator | 2026-04-08 04:30:37 - clean up subnets 2026-04-08 04:30:38.028227 | orchestrator | 2026-04-08 04:30:38 - subnet-testbed-management 2026-04-08 04:30:38.246507 | orchestrator | 2026-04-08 04:30:38 - clean up networks 2026-04-08 04:30:38.466865 | orchestrator | 2026-04-08 04:30:38 - net-testbed-management 2026-04-08 04:30:38.770984 | orchestrator | 2026-04-08 04:30:38 - clean up security groups 2026-04-08 04:30:38.820582 | orchestrator | 2026-04-08 04:30:38 - testbed-node 2026-04-08 04:30:38.986359 | orchestrator | 2026-04-08 04:30:38 - testbed-management 2026-04-08 04:30:39.102358 | orchestrator | 2026-04-08 04:30:39 - clean up floating ips 2026-04-08 
04:30:39.134261 | orchestrator | 2026-04-08 04:30:39 - 81.163.193.114
2026-04-08 04:30:39.525771 | orchestrator | 2026-04-08 04:30:39 - clean up routers
2026-04-08 04:30:39.642181 | orchestrator | 2026-04-08 04:30:39 - testbed
2026-04-08 04:30:41.187553 | orchestrator | ok: Runtime: 0:00:21.201817
2026-04-08 04:30:41.189969 |
2026-04-08 04:30:41.190070 | PLAY RECAP
2026-04-08 04:30:41.190136 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-08 04:30:41.190168 |
2026-04-08 04:30:41.384044 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-08 04:30:41.385704 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-08 04:30:42.206151 |
2026-04-08 04:30:42.206340 | PLAY [Cleanup play]
2026-04-08 04:30:42.222897 |
2026-04-08 04:30:42.223065 | TASK [Set cloud fact (Zuul deployment)]
2026-04-08 04:30:42.290142 | orchestrator | ok
2026-04-08 04:30:42.300335 |
2026-04-08 04:30:42.300526 | TASK [Set cloud fact (local deployment)]
2026-04-08 04:30:42.336128 | orchestrator | skipping: Conditional result was False
2026-04-08 04:30:42.351241 |
2026-04-08 04:30:42.351398 | TASK [Clean the cloud environment]
2026-04-08 04:30:43.576286 | orchestrator | 2026-04-08 04:30:43 - clean up servers
2026-04-08 04:30:44.191128 | orchestrator | 2026-04-08 04:30:44 - clean up keypairs
2026-04-08 04:30:44.210958 | orchestrator | 2026-04-08 04:30:44 - wait for servers to be gone
2026-04-08 04:30:44.260898 | orchestrator | 2026-04-08 04:30:44 - clean up ports
2026-04-08 04:30:44.341416 | orchestrator | 2026-04-08 04:30:44 - clean up volumes
2026-04-08 04:30:44.405257 | orchestrator | 2026-04-08 04:30:44 - disconnect routers
2026-04-08 04:30:44.431317 | orchestrator | 2026-04-08 04:30:44 - clean up subnets
2026-04-08 04:30:44.456172 | orchestrator | 2026-04-08 04:30:44 - clean up networks
2026-04-08 04:30:44.654361 | orchestrator | 2026-04-08 04:30:44 - clean up security groups
2026-04-08 04:30:44.693733 | orchestrator | 2026-04-08 04:30:44 - clean up floating ips
2026-04-08 04:30:44.718456 | orchestrator | 2026-04-08 04:30:44 - clean up routers
2026-04-08 04:30:44.899093 | orchestrator | ok: Runtime: 0:00:01.608315
2026-04-08 04:30:44.901866 |
2026-04-08 04:30:44.902006 | PLAY RECAP
2026-04-08 04:30:44.902111 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-08 04:30:44.902168 |
2026-04-08 04:30:45.098620 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-08 04:30:45.099724 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-08 04:30:45.901208 |
2026-04-08 04:30:45.901389 | PLAY [Base post-fetch]
2026-04-08 04:30:45.917390 |
2026-04-08 04:30:45.917549 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-08 04:30:45.994154 | orchestrator | skipping: Conditional result was False
2026-04-08 04:30:46.009648 |
2026-04-08 04:30:46.010004 | TASK [fetch-output : Set log path for single node]
2026-04-08 04:30:46.080978 | orchestrator | ok
2026-04-08 04:30:46.088879 |
2026-04-08 04:30:46.089039 | LOOP [fetch-output : Ensure local output dirs]
2026-04-08 04:30:46.593857 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/041735f2a82d4175bace8e93dd5cfed6/work/logs"
2026-04-08 04:30:46.899115 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/041735f2a82d4175bace8e93dd5cfed6/work/artifacts"
2026-04-08 04:30:47.219735 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/041735f2a82d4175bace8e93dd5cfed6/work/docs"
2026-04-08 04:30:47.247203 |
2026-04-08 04:30:47.247387 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-08 04:30:48.296304 | orchestrator | changed: .d..t...... ./
2026-04-08 04:30:48.296619 | orchestrator | changed: All items complete
2026-04-08 04:30:48.296657 |
2026-04-08 04:30:49.102775 | orchestrator | changed: .d..t...... ./
2026-04-08 04:30:49.855379 | orchestrator | changed: .d..t...... ./
2026-04-08 04:30:49.873019 |
2026-04-08 04:30:49.873273 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-08 04:30:49.917515 | orchestrator | skipping: Conditional result was False
2026-04-08 04:30:49.934940 | orchestrator | skipping: Conditional result was False
2026-04-08 04:30:49.957757 |
2026-04-08 04:30:49.957883 | PLAY RECAP
2026-04-08 04:30:49.957938 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-08 04:30:49.957967 |
2026-04-08 04:30:50.118970 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-08 04:30:50.120892 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-08 04:30:50.939076 |
2026-04-08 04:30:50.939249 | PLAY [Base post]
2026-04-08 04:30:50.954627 |
2026-04-08 04:30:50.954881 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-08 04:30:52.154643 | orchestrator | changed
2026-04-08 04:30:52.166584 |
2026-04-08 04:30:52.167216 | PLAY RECAP
2026-04-08 04:30:52.167847 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-08 04:30:52.168006 |
2026-04-08 04:30:52.382402 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-08 04:30:52.383922 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-08 04:30:53.304899 |
2026-04-08 04:30:53.305081 | PLAY [Base post-logs]
2026-04-08 04:30:53.316232 |
2026-04-08 04:30:53.316412 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-08 04:30:53.813497 | localhost | changed
2026-04-08 04:30:53.828348 |
2026-04-08 04:30:53.828536 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-08 04:30:53.857575 | localhost | ok
2026-04-08 04:30:53.862524 |
2026-04-08 04:30:53.862650 | TASK [Set zuul-log-path fact]
2026-04-08 04:30:53.890669 | localhost | ok
2026-04-08 04:30:53.910307 |
2026-04-08 04:30:53.910499 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-08 04:30:53.971053 | localhost | ok
2026-04-08 04:30:53.975789 |
2026-04-08 04:30:53.975939 | TASK [upload-logs : Create log directories]
2026-04-08 04:30:54.536178 | localhost | changed
2026-04-08 04:30:54.538915 |
2026-04-08 04:30:54.539025 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-08 04:30:55.002639 | localhost -> localhost | ok: Runtime: 0:00:00.007139
2026-04-08 04:30:55.011998 |
2026-04-08 04:30:55.012209 | TASK [upload-logs : Upload logs to log server]
2026-04-08 04:30:55.536931 | localhost | Output suppressed because no_log was given
2026-04-08 04:30:55.539233 |
2026-04-08 04:30:55.539353 | LOOP [upload-logs : Compress console log and json output]
2026-04-08 04:30:55.592196 | localhost | skipping: Conditional result was False
2026-04-08 04:30:55.598421 | localhost | skipping: Conditional result was False
2026-04-08 04:30:55.606279 |
2026-04-08 04:30:55.606383 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-08 04:30:55.653822 | localhost | skipping: Conditional result was False
2026-04-08 04:30:55.654456 |
2026-04-08 04:30:55.657126 | localhost | skipping: Conditional result was False
2026-04-08 04:30:55.662631 |
2026-04-08 04:30:55.662735 | LOOP [upload-logs : Upload console log and json output]